In this issue we will explore how Envoy® Server applies the concept of “resource management” in several ways that contribute to its remarkable performance, flexibility, and rich feature set.
First things first though – what is resource management? Well, it is management of one or more resources, of course. Hmmm, let me try that again. If a “resource” is something that you have in limited supply, and demands for that resource outnumber the resource count, you must in some way “manage” access to the resource so that the demands are satisfied efficiently. For instance, you may have plenty of people with coffee mugs but only one coffee pot. In this scenario you might want to make sure the coffee drinkers “take turns” retrieving their drink so that everyone has a chance to access the “resource” of the coffee pot. You could take ordered or random turns, enforce “one cup only” rules, or let the thirstiest get the most coffee. Regardless of the implementation, employing resource management allows one coffee pot to be used by several different people who want a drink; without management, a rush on the single coffee pot could get ugly, with no one receiving any coffee – truly an office nightmare.

Note that while coffee is being poured, the pot cannot be used by a different drinker; the next drinker has to wait until the previous one is done pouring – this is called “locking” a resource. Resource locking basically means we do not break any “rules” associated with the given resource – in our example the coffee pot only pours into one mug at a time. Good resource management makes the locking imposed upon the requestors occur as little as possible, or even disappear entirely; in our example locking is reduced by adding a second pot, so one pot can be brewing while the other is being poured.
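To make the “take turns” and “locking” ideas concrete, here is a tiny, generic C++ sketch. It is purely illustrative and is not Envoy® code; the pot, the drinkers, and the cup count are all made up for the example. Several drinker threads share one pot, and a mutex forces them to take turns so the pot only ever pours into one mug at a time:

```cpp
// A generic illustration of the "one coffee pot, many mugs" idea:
// several threads (drinkers) share one pot, and a mutex makes them
// take turns so the pot only pours into one mug at a time.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex pot_lock;          // "locks" the shared coffee pot
int cups_remaining = 10;      // the limited resource

void get_coffee(int drinker)
{
    std::lock_guard<std::mutex> turn(pot_lock);  // wait your turn
    if (cups_remaining > 0) {
        --cups_remaining;                        // pour one cup
        std::cout << "Drinker " << drinker << " got coffee, "
                  << cups_remaining << " cups left\n";
    } else {
        std::cout << "Drinker " << drinker << " found an empty pot\n";
    }
}   // leaving the function releases the lock for the next drinker

int main()
{
    std::vector<std::thread> drinkers;
    for (int i = 0; i < 12; ++i)
        drinkers.emplace_back(get_coffee, i);
    for (auto& d : drinkers)
        d.join();
}
```

Adding a second pot would simply mean a second mutex-protected pot, so fewer drinkers end up standing around waiting.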
So what are the resources that Envoy® Server has to manage?
- Memory is something that most software providers no longer pay much attention to because it is considered “inexpensive,” so the vendor’s application is programmed on the premise that memory is simply “allocated on demand” as needed. But wait a minute – doesn’t a piece of hardware have limited memory on board? And doesn’t the owner of the hardware still need to buy that memory? Yes! Envoy® Server manages memory carefully – not only is it written to be very memory efficient, it also has built-in memory allocation and heap management routines. These memory resource management techniques lead to more efficient use of memory while also improving performance and reducing fragmentation. Going back to our example above – if there were three available coffee pots and a “brew on demand” model were used (i.e., a clean pot is needed for each brew), we would hit a hard limit of three brew cycles. Compare that with a managed approach in which pots are “recycled” (i.e., cleaned whenever they are not in a pour or brew cycle); in this scenario we can keep reusing pots very efficiently as they are emptied. (A simplified sketch of this recycling idea appears after this list.) Memory resource management makes the Envoy® framework very efficient, which in turn keeps you from having to deploy as many separate hardware servers.
- Network connections are another area where many vendors’ applications use an “allocate on demand” model whenever there is a need to communicate with a new destination. Sure, the connection has to be made, but why incur the performance hit of creating a new connection to the destination every single time? If the application provides connectivity services, then connections are needed constantly – so why create and destroy them repeatedly? Again, through the use of resource management, connections can be “recycled” at a much lower performance cost than a repeated build and tear-down process. Envoy® Server uses several allocation strategies as well as shared connections (pooling of similarly tasked connections) to make very efficient use of network connections. Inbound connection requests to Envoy® Server are handled in a highly optimized manner that eliminates the processor overhead of “waiting” for potential requests and instead spends processor cycles only on requests that have already presented themselves to the server. Outbound connections are pooled, allowing multiple plugins to share the same connections rather than each plugin establishing its own set. (A simplified connection pool sketch also follows this list.) The resource management of connections within Envoy® Server closely mirrors the memory resource management, with the result again being more performance at a lower overall cost to the machine. As a solution scales, these efficiencies translate into lower hardware requirements.
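Here is the promised sketch of the memory “recycling” idea – a toy, generic C++ block pool, not the actual Envoy® allocator (whose internals are not described here). The BlockPool name and the 256-byte block size are made up for the example. Blocks go back onto a free list and are reused instead of being returned to the heap on every request:

```cpp
// A toy fixed-size block pool: blocks are handed out on request and
// "recycled" onto a free list when returned, rather than being
// allocated from and released to the heap on every use.
#include <cstddef>
#include <iostream>
#include <vector>

class BlockPool {
public:
    explicit BlockPool(std::size_t block_size) : block_size_(block_size) {}

    ~BlockPool() {
        // Only blocks sitting on the free list are released here;
        // in this sketch, callers must return every block before teardown.
        for (void* block : free_list_)
            ::operator delete(block);
    }

    void* acquire() {
        if (!free_list_.empty()) {           // reuse a recycled block
            void* block = free_list_.back();
            free_list_.pop_back();
            return block;
        }
        return ::operator new(block_size_);  // only hit the heap when empty
    }

    void release(void* block) {              // "clean the pot" for reuse
        free_list_.push_back(block);
    }

private:
    std::size_t block_size_;
    std::vector<void*> free_list_;
};

int main() {
    BlockPool pool(256);
    void* a = pool.acquire();   // first request hits the heap
    pool.release(a);            // recycled, not freed
    void* b = pool.acquire();   // same block handed back out
    std::cout << std::boolalpha << (a == b) << "\n";  // prints true
    pool.release(b);
}
```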
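And here is the connection pool sketch mentioned above, in the same generic spirit – the Connection type, the “backend-host:9000” destination, and the pool size are hypothetical stand-ins, not Envoy® APIs. A fixed set of connections is created up front, and callers check them out and return them, so many callers share a few connections instead of each opening its own:

```cpp
// A generic outbound connection pool sketch: a fixed set of connections
// is created up front and callers check them out and return them, so
// many callers share a few connections instead of each one opening its own.
#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <memory>
#include <mutex>
#include <queue>
#include <string>

// Stand-in for a real socket/session to some backend destination.
struct Connection {
    explicit Connection(std::string dest) : destination(std::move(dest)) {}
    void send(const std::string& msg) {
        std::cout << "[" << destination << "] " << msg << "\n";
    }
    std::string destination;
};

class ConnectionPool {
public:
    ConnectionPool(const std::string& destination, std::size_t size) {
        for (std::size_t i = 0; i < size; ++i)
            idle_.push(std::make_shared<Connection>(destination));
    }

    // Block until a connection is free, then hand it out.
    std::shared_ptr<Connection> checkout() {
        std::unique_lock<std::mutex> lock(mtx_);
        available_.wait(lock, [this] { return !idle_.empty(); });
        auto conn = idle_.front();
        idle_.pop();
        return conn;
    }

    // Return the connection for the next caller instead of closing it.
    void checkin(std::shared_ptr<Connection> conn) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            idle_.push(std::move(conn));
        }
        available_.notify_one();
    }

private:
    std::mutex mtx_;
    std::condition_variable available_;
    std::queue<std::shared_ptr<Connection>> idle_;
};

int main() {
    ConnectionPool pool("backend-host:9000", 2);   // 2 shared connections
    auto c = pool.checkout();
    c->send("<request>lookup</request>");
    pool.checkin(c);                               // recycled, not torn down
}
```

In this sketch, checkout simply blocks until a connection is free; a real pool could just as easily grow on demand or time out, which is one of the policy choices a managed approach makes possible.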
In addition to the physical resource savings detailed above, there are strategic savings to be had by using the Envoy® framework as well. Envoy® Server can dynamically manage a pool of backend system connections for interested desktop applications; this becomes extremely important when connectivity involves expensive per-seat licensing. Envoy® Server can also be used to throttle demand on backend resources when that degree of granular control is required (a simple throttling sketch appears below). Finally, a great deal of development time can be saved by using generic XML messaging – the simple, portable protocol used by the Envoy® framework – for all of your application data access and resource monitoring activities. The Envoy® framework is completely vendor neutral, so developing on one platform and deploying to another – or having to upgrade to a new platform in the middle of development – becomes a much less costly affair when code can be moved rather than designed and written again. Pour yourself a “cup of joe” and think about the potential savings!
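As promised, here is one last generic C++ sketch of the throttling idea – again purely illustrative, with a made-up BackendThrottle class and limit rather than anything from the Envoy® framework. At most a fixed number of callers may use the backend at once; everyone else waits for a slot to free up:

```cpp
// A minimal "throttle" sketch: at most max_in_flight callers may touch
// the backend at once; everyone else waits until a slot is released.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class BackendThrottle {
public:
    explicit BackendThrottle(int max_in_flight) : slots_(max_in_flight) {}

    void acquire() {
        std::unique_lock<std::mutex> lock(mtx_);
        slot_free_.wait(lock, [this] { return slots_ > 0; });
        --slots_;
    }

    void release() {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            ++slots_;
        }
        slot_free_.notify_one();
    }

private:
    std::mutex mtx_;
    std::condition_variable slot_free_;
    int slots_;
};

int main() {
    BackendThrottle throttle(2);   // allow only 2 concurrent backend calls
    std::vector<std::thread> callers;
    for (int i = 0; i < 5; ++i) {
        callers.emplace_back([&throttle, i] {
            throttle.acquire();                    // wait for a free slot
            std::cout << "caller " << i << " using backend\n";
            throttle.release();                    // give the slot back
        });
    }
    for (auto& t : callers)
        t.join();
}
```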
Stay tuned for additional “Envoy® Updates” in the next issue of the VEXIS Voice.
Richard Wolff, Director of Software R&D