How Microsoft Dynamics GP’s 3-tier Architecture Deployment Can Resolve Performance Issues

Most Microsoft Dynamics GP installations are scripted as follows: Microsoft SQL Server is installed on the designated database server; the Dynamics GP server client application is installed and all company database objects are created on the database server; and user workstations are installed and configured. In addition, standard database and application maintenance procedures are put in place, sometimes going as far as setting up a few database and application maintenance plans to address regular maintenance operations.

...

About Mariano Gomez

Mariano Gomez is a Microsoft MVP, PMP and EVP for Midmarket Solutions at Intelligent Partnerships, LLC. He is the original developer of the Microsoft Dynamics GP Spanish release for Latin America and has been consulting and implementing technology solutions for organizations across the United States, the Caribbean, and Latin America for the last 20 years. Mariano holds an MIS degree from the University of Phoenix.

About Intelligent Partnerships, LLC

With over 150 years of combined management and technology consulting experience, Intelligent Partnerships skillfully partners with organizations to solve complex problems, boost operating performance and maximize value for stakeholders. 


GP DPS can be useful but only for a limited number of tasks.

GP DPS and DPM can be useful tools, but only for a limited number of processes, essentially batch posting. There is no way to add additional tasks or reports to the DPS. No remote process exists for Manufacturing, which we desperately need; for manufacturing receipts, for instance.

If a user constantly posts "big" batches, it will free up the client, and this is definitely a plus for that user. But if you don't post batches with a large number of transactions, then from a user perspective the overall performance of GP won't improve that much. Unfortunately, most of the processing logic in GP, the business rules, is not on the database side yet; it still resides on the GP client. In the end, you may still find yourself needing to "add more processors and add more memory" on the client side...

Denis L'Heureux

DPS/DPM useful in very limited scenarios

I completely agree with Denis. Our experience with these services was very mixed. We saw minor success in the same scenarios that Denis described. We've had between 50 and 100 users on GP for the last 5 years. We've implemented and pulled back DPS twice, both times because we saw better performance gains from adding more memory or upgrading our servers, which we do on a regular 3-5 year cycle anyway. Why Microsoft has not transferred more processing to an "application server" role, separate from client and database, I'll never know. That, and the fact that so much business logic still lives in the client, are two reasons why larger companies avoid GP even though the feature set meets their needs; it's just not robust enough. I guess that's why they bought Navision to offer Axapta as a growth path.

Steve Wales

DPS/DPM useful in very limited scenarios

Steve,

Thanks for the follow up. For organizations on a 3-5 year technology replacement cycle, the DPS/DPM components may not make any difference, since they are always taking advantage of new and faster developments in hardware.

I am certainly not advertising DPS/DPM as actual replacements for physical hardware, but rather as one more tool in the arsenal available to customers before taking the route of more memory, more processors, and/or new servers.

I won't debate the fact that a lot of business logic resides in the GP interface; however, I have seen over the years how this logic has found its way into stored procedures and functions on the RDBMS. Unfortunately, there is far too much code that integrates with GP -- I call this 'the irony of third-party development' -- that still uses methods and interfaces implemented in the standard GP code base. This is where the irony resides, since the standard GP methods and interfaces need to remain in place in order to accommodate the ISVs' integrating code.

Also, with the focus currently on the extended ERP environment (Office, SharePoint, BI), I don't believe any efforts will be made to move this code to the back end. It would be a daunting task and perhaps years and years of development, service packs, and hot fixes to accomplish this.

Best regards,

MG.-
Mariano Gomez, MVP
Maximum Global Business, LLC
http://www.maximumglobalbusiness.com

GP DPS can be useful but only for a limited number of tasks

Denis,

I appreciate your time and input on the article. Unfortunately, not all Dynamics GP add-on modules followed the same development path -- Manufacturing, Project Accounting, and Field Service, for example.

Let's take Manufacturing. As you may know, the Manufacturing module was among the first integrating products acquired by the then Great Plains Software. While Microsoft has gone to great lengths to improve the integration with the rest of the GP system, the fact is that certain processes that could take advantage of DPS/DPM have been left behind -- it would be great if you could execute MRP runs remotely on a DPS, for example.

There are a number of technical limitations that prevent certain processes from being executed remotely on a DPS/DPM, but these are what I consider 'self-imposed' limitations.

However, I have been part of large implementations involving thousands of receivables and payables documents and very limited budgets, where buying more memory, processors, or new servers was not always an option. And in this downturn, I am seeing a trend of going back to DPS/DPM as a way to improve performance in certain areas.

I will be the first one to say that DPS/DPM are not a performance panacea, but they are always worth a try.

Best regards,

MG.-
Mariano Gomez, MVP
Maximum Global Business, LLC
http://www.maximumglobalbusiness.com

GP DPS can be useful but only for a limited number of tasks

Thank you Mariano for your reply.

I agree that large implementations could benefit from DPS/DPM in certain areas. It might even represent a significant productivity gain in some situations. That said, I think DPS/DPM don't really improve the performance of the system (operations are not computed faster), but people are more efficient if they don't have to wait for the completion of a task.

To improve performance, Microsoft should definitely put some effort into moving the code to the back end, especially if the focus, as you said in your reply to Steve, is currently on the extended ERP environment. That would actually open the door to new functionality, or to new web-based applications, for instance. ISVs would certainly need a transition period to adapt their code, but there's no gain without pain... The gains for customers would be substantial, though.

The Great Plains system was initially coded to work with multiple database vendors. That's probably one of the reasons why the system uses so many resources, and why it creates and manages its own database locks... Just start a trace with SQL Profiler and then start GP. It's very ugly! With SQL Server as the RDBMS, Microsoft should clean up the code and take the opportunity to move it, slowly but surely, to the back end.

Speaking of Manufacturing and MRP, with GP 10 Microsoft did something really good: they moved the MRP processing code from the GP interface to a stored procedure on SQL Server. Now MRP tasks run much faster, and they can be scheduled on the RDBMS to run at night. Wow! That was a huge improvement for us (several companies and sites, 80,000 items, etc.). Now let's hope that Microsoft will continue to improve the integration with the rest of the GP system, including eConnect support for the Manufacturing module!

Best Regards,
Denis L'Heureux

DPS versus Load Balancing Terminal Servers

In your opinion, what would be the advantage of using multiple DPS servers versus load-balancing multiple terminal servers that are GP clients? Does DPS actually load-balance based upon the intensity of the task or is it the same as Windows load-balancing (looking at number of users)?

DPS versus Load Balancing Terminal Servers

Denni,

Thanks for the follow up!

DPM performs load balancing based on the availability of any given DPS in the service. That is, if you have 4 DPS under a DPM and 3 are busy processing tasks, DPM will submit the process to the one DPS that is currently idle. Conversely, if all DPS in the service are busy, DPM will queue the task and release it to the first DPS to become available.

Windows Terminal Servers and Citrix servers perform load balancing based on the number of users currently assigned to each server. Keep in mind that not all users logged in to a Terminal Server will necessarily be working in GP; hence, some servers may run at a lower processing capacity even though the maximum number of users has been reached.

Assuming all Terminal Servers or Citrix servers participating in a load-balancing configuration are strictly serving up GP -- quite an expensive proposition -- you still cannot guarantee that all resources (memory, processors, disks) on each server are being fully utilized. Utilization will depend on the processes being executed by the users logged on to any given server.

Best regards,

MG.-
Mariano Gomez, MIS, PMP, MVP, MCP
Maximum Global Business, LLC
http://www.maximumglobalbusiness.com

DPS versus Load Balancing Terminal Servers

Denni,

DPM balances processing load based on the availability of any given DPS under the service. If you have four DPS under a service and three of them are currently processing tasks submitted by different clients, DPM will assign any incoming process to the available DPS. Conversely, if all DPS are busy at the same time, DPM will queue the incoming task and assign it to the first DPS that becomes available. DPM does not make a 'judgment' based on hardware resources (memory, processors, disks), but rather on whether a DPS is idle or busy. So, for tasks that you consider extremely intensive, you may want to configure that task under Process Server to use a dedicated DPS.
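The idle-or-queue behavior described above can be modeled in a few lines. This is a minimal Python sketch of that dispatch policy, not the actual DPM implementation; the class and server names are purely illustrative.

```python
from collections import deque

class ProcessManager:
    """Toy model of DPM-style dispatch (hypothetical names, not the real API)."""

    def __init__(self, server_names):
        self.idle = deque(server_names)   # DPS instances ready for work
        self.queue = deque()              # tasks waiting for a free DPS

    def submit(self, task):
        # Route the task to any idle DPS; otherwise hold it in the queue.
        if self.idle:
            return self.idle.popleft(), task
        self.queue.append(task)
        return None, task                 # None means "queued, not yet assigned"

    def task_finished(self, server):
        # The first DPS to free up takes the oldest queued task.
        if self.queue:
            return server, self.queue.popleft()
        self.idle.append(server)
        return None, None
```

With two servers, a third submission is queued and is picked up by whichever DPS finishes first, which matches the queue-and-release behavior described above.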

In a load-balanced Terminal Server or Citrix environment, a server is assigned based on the maximum number of connections given to that server within the farm. It's worth noting that in load-balanced TS and Citrix environments, not all users logged on to a particular server may be working exclusively in GP. This means that not all servers' resources are performing at the same capacity. And even if we were to assume that all TS servers were dedicated to GP, you still won't find any two servers performing at the same capacity, since each user is certain to be running different processes -- for example, someone printing checks versus someone inquiring on customer information.
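By contrast, connection-count balancing simply picks the server with the fewest active sessions, regardless of what those sessions are doing. A short Python sketch of that policy (the session-count model and server names are hypothetical, not how any particular TS or Citrix broker is implemented):

```python
def assign_session(session_counts, max_sessions):
    """Least-connections assignment: return the server with the fewest active
    sessions, or None if every server is at its connection cap. Note that the
    count says nothing about how busy each session actually is, which is the
    limitation discussed above."""
    candidates = [s for s, n in session_counts.items() if n < max_sessions]
    if not candidates:
        return None  # the farm is full
    target = min(candidates, key=session_counts.get)
    session_counts[target] += 1
    return target
```

A server running one user printing checks and a server running three users doing light inquiries look identical to this policy except for the raw count, which is exactly why utilization across the farm ends up uneven.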

Finally, load balanced Terminal Servers and Citrix servers are typical in environments with a large number of mobile and geographically dispersed business users. The key driver in these environments is to keep application maintenance and support at a minimum while offering high availability and maximizing the infrastructure.

DPM and DPS are just tools within the arsenal that allow the distribution of certain processes to other machines, giving the client application extra room to perform other operations. I cannot say there is an advantage or disadvantage to TS load balancing over Process Server or vice versa, since they serve different purposes from an enterprise perspective.

Best regards,

MG.-
Mariano Gomez, MIS, PMP, MCP, MVP
Maximum Global Business, LLC
http://www.maximumglobalbusiness.com
