Service Discovery in Service Oriented Architectures

In Service Oriented Architectures/Microservice Architectures, an application is built as a number of RESTful services, each of which does one thing well. Since that one thing may be compute-intensive or otherwise resource-intensive, it makes sense to scale each service independently. From the end user’s perspective, only the end result matters, and producing it requires all of these services working together, so each service ends up calling one or many other services.

As a result, in SOA/distributed systems, services need to find each other; for example, a web service might need to find a caching service or another mid-tier component service. A service discovery system should provide a mechanism for:
• Service registration
• Service discovery
• Handling failover of service instances
• Load balancing across multiple instances of a service
• Handling issues arising from an unreliable network

Other aspects to consider when choosing or implementing a service registration/discovery solution:
• Integrating service registration and discovery into the application itself versus using a separate sidekick process
• Compatibility of the application’s software stack with the service discovery solution
• Handling failure/outage of the service discovery solution itself

1. Using a DNS

The simplest solution to registration and discovery is to just put all of your backends behind a single DNS name. To address a service, you contact it by DNS name and the request should get to a random backend.

On our own infrastructure, we can use dynamic DNS backends like BIND-DLZ for registration. In the cloud, with hosted DNS like Route53, simple API calls suffice. In AWS, if we use round-robin CNAME records, the same records would work from both inside and outside AWS.
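For illustration, here is a rough sketch of what registration could look like against Route53 with boto3; the hosted zone ID, record name and addresses are hypothetical, and a short TTL is used to limit staleness. All healthy backends live in one multi-value record set, so clients resolving the name get DNS round robin.

import boto3

route53 = boto3.client("route53")

# UPSERT creates or replaces the record set; de-registration is the same
# call with the failed backend removed from the list.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",                     # hypothetical zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "cache-service.internal.example.com",
            "Type": "A",
            "TTL": 10,                              # short TTL to limit staleness
            "ResourceRecords": [{"Value": "10.0.0.12"},
                                {"Value": "10.0.0.13"}],
        },
    }]},
)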

Advantages

1. Easy to implement, and DNS has been around for a long time.

Disadvantages

1. Service instances have to poll for all changes; there’s no way to push state. A monitoring component has to be implemented which detects server failures or additions and propagates the state changes to the consumers.

2. DNS suffers from propagation delays; even after a server failure is detected and a de-registration command is issued to DNS, there will be at least a few seconds before this information reaches the consumers. Also, due to the various layers of caching in the DNS infrastructure, the exact propagation delay is often non-deterministic.

3. If a service is identified just by name, there’s no way to control which boxes get traffic. We get the equivalent of random routing, with load chaotically piling up behind some backends while others are left idle.

4. Most services sit behind a front-end reverse proxy like nginx (to handle more connections, load balancing, SSL offloading, static file caching, etc.). These proxies cache the DNS resolution results unless you resort to configuration-file hacks, which causes issues with detecting state changes. The same is true of HAProxy.

Note: By using a tool like Serf to handle dynamic node additions/failures, the DNS solution can be made more robust. Serf is a tool for cluster membership, failure detection, and orchestration that is decentralized, fault-tolerant and highly available. Some of the uses of Serf are listed below, followed by a sketch of an event handler.

1. Maintain the list of web servers for a load balancer and notify that load balancer whenever a node comes online or goes offline.

2. Detect failed nodes within seconds, notify the rest of the cluster, and execute handler scripts allowing you to handle these events.

3. Broadcast custom events and queries to a cluster. These can be used to trigger deploys, propagate configuration, etc.
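As an illustration of the first two uses, here is a minimal sketch (in Python) of a Serf event-handler script. Serf passes the event type in the SERF_EVENT environment variable and, for membership events, the affected members on stdin; the update-backends helper and the HAProxy reload command below are placeholders for whatever templating/reload mechanism is actually in use.

#!/usr/bin/env python
# Minimal sketch of a Serf event handler (hypothetical helper commands).
import os
import subprocess
import sys

event = os.environ.get("SERF_EVENT", "")
if event in ("member-join", "member-leave", "member-failed"):
    # Each stdin line describes an affected member: name, address, ...
    for line in sys.stdin.read().splitlines():
        fields = line.split("\t")
        print("%s: %s (%s)" % (event, fields[0], fields[1]))
    # Regenerate the load balancer's backend list and reload it.
    subprocess.call(["/usr/local/bin/update-backends"])   # placeholder
    subprocess.call(["service", "haproxy", "reload"])     # placeholder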

2. Using In-App Service Discovery through a persistent store (database or filesystem) to store the configuration

The in-app solution for service discovery can be seen in Figure 1. The configuration of all the services is stored in a config DB, which every service loads into custom in-memory data structures at start-up time.
This configuration information is used by custom software load balancers (developed in-house) that use round-robin load balancing to distribute load across dependent mid-tier services.
With this approach there is a clear need to detect servers that are added, removed or replaced dynamically and to reload the config in all servers to avoid service interruption. A pinger mechanism needs to be embedded into each application to check the health of the services and trigger a reload whenever a change is detected.
Currently the pinger logic is separate and not part of the application, so a manual trigger of the config reload is necessary.
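As an illustration, a minimal sketch of the in-app approach in Python: endpoints are loaded from a config source and handed out round-robin. The JSON file path and service names are assumptions for illustration; in the setup described above the data would come from the config DB.

import itertools
import json

class RoundRobinBalancer(object):
    def __init__(self, config_path):
        self.config_path = config_path
        self.reload()

    def reload(self):
        # Re-read the endpoint list, e.g. when the pinger detects a change.
        # Expected shape: {"cache-service": ["10.0.0.1:6379", ...], ...}
        with open(self.config_path) as f:
            config = json.load(f)
        self._pools = {name: itertools.cycle(hosts)
                       for name, hosts in config.items()}

    def next_endpoint(self, service_name):
        # Hand out backends for a service in round-robin order.
        return next(self._pools[service_name])

# Usage (hypothetical file and service name):
# lb = RoundRobinBalancer("/etc/myapp/services.json")
# host = lb.next_endpoint("cache-service")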
[Figure 1: In-App service discovery and load balancing]

Advantages
1. Easy to implement.

Disadvantages
1. Things get more complicated as we start deploying more services.
2. With a live system, service locations can change quite frequently due to auto or manual scaling, which requires a reload of the config in all servers.
3. The config also has to be reloaded on new deployments of services, as well as when hosts fail or are added/replaced.
4. Health checking of services is separated out from the application, so the config reload has to be triggered manually.

3. Using a distributed consistent datastore (CP model in CAP theorem) – ZOOKEEPER

Apache ZooKeeper is an open source distributed coordination service that’s popular for use cases like dynamic configuration management and distributed locking. It can also be used for service discovery.

Service registration is implemented with ephemeral nodes under a namespace. Ephemeral nodes only exist while the client is connected, so typically a backend service registers itself after startup with its location information. If it fails or disconnects, the node disappears from the tree.
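A rough sketch of such a registration using the kazoo Python client (the path layout and host/port values are illustrative):

from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

# Ephemeral node: it disappears automatically if this process dies or
# loses its ZooKeeper session, which is what makes de-registration implicit.
zk.create(
    "/services/cache-service/instance-0001",
    b'{"host": "10.0.0.12", "port": 6379}',
    ephemeral=True,
    makepath=True,
)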

Service discovery is implemented by listing and watching the namespace for the service. Clients receive all of the currently registered services as well as notifications when a service becomes unavailable or a new one registers. Clients also need to handle any load balancing or failover themselves.
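Continuing the sketch above, discovery with kazoo could look roughly like this; the watch callback fires on every membership change, and the client does its own (here random) load balancing:

import json
import random
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

backends = []

@zk.ChildrenWatch("/services/cache-service")
def refresh(children):
    # Invoked with the current children now, and again on every change.
    global backends
    backends = [json.loads(zk.get("/services/cache-service/" + c)[0])
                for c in children]

def pick_backend():
    # Load balancing/failover is the client's responsibility.
    return random.choice(backends)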

The solution to dynamically handle configuration changes can be seen in Figure 2. Even though it helps solve the problem, a few issues make this design inefficient; Figure 3 shows how it can be improved.

[Figure 2: Dynamic service discovery]

The diagram on the left represents how ZooKeeper is typically used for dynamic service discovery: applications consume service information and dynamic configuration from ZooKeeper by connecting to it directly. They cache data in memory but otherwise rely on ZooKeeper as the source of truth, so ZooKeeper outages directly impact the applications.
The diagram on the right shows how we can tweak the design to take advantage of the conveniences that ZooKeeper provides while tolerating its unavailability. The application is completely isolated from ZooKeeper. Instead, a daemon process running on each machine connects to ZooKeeper, establishes watches on the data it is configured to monitor, and whenever the data changes, downloads it into a file on local disk at a well-known location. The application itself only consumes the data from the local file and reloads when it changes; it doesn’t need to care that ZooKeeper is involved in propagating updates. With this design, applications are decoupled from ZooKeeper and the solution becomes more robust and fault tolerant.
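A minimal sketch of that local daemon pattern using kazoo: watch a ZooKeeper node and write its contents to a well-known file that the application reads (the paths here are assumptions for illustration):

import os
import tempfile
from kazoo.client import KazooClient

ZK_PATH = "/config/cache-service"
LOCAL_FILE = "/var/run/discovery/cache-service.json"

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

@zk.DataWatch(ZK_PATH)
def on_change(data, stat):
    # Fired whenever the node's data changes; skip deletions.
    if data is None:
        return
    # Write atomically so the application never reads a partial file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(LOCAL_FILE))
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.rename(tmp, LOCAL_FILE)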

[Figure 3: Dynamic service discovery improvement]

Advantages
1. Can handle dynamic configuration changes.

Disadvantages
1. Can be tricky and complicated to implement the solution correctly.
2. The ZooKeeper cluster becomes a single point of failure (SPoF). The solution has to handle outages/failures of the ZooKeeper cluster for various reasons, such as protocol bugs, network partitions, or cluster unavailability due to too many client connections.
3. Recovery might sometimes take a few hours due to a combination of the thundering herd problem and having to restart from a clean slate and recover from backup.
4. Works well only if a unified software stack is used across services; otherwise the solution might have to be implemented in several languages.
5. Not possible to use this solution for applications you did not write yourself but that still consume your infrastructure: nginx, a CI server, rsyslog or other systems software.

4. Using Curator ZooKeeper recipes
The ZooKeeper API can be difficult to use properly. We can use existing recipes like the Curator Service Discovery extension to simplify the solution. We would still have to deal with ZooKeeper outages under certain circumstances; however, with the fallback-to-persistent-datastore approach we would be able to handle any outages of the ZooKeeper cluster efficiently.

5. Using open source software specifically tailored for service registration and discovery
Airbnb’s SmartStack is a combination of two custom tools, Nerve and Synapse, that leverage HAProxy and ZooKeeper to handle service registration and discovery. Since SmartStack relies on ZooKeeper, the ZooKeeper problems mentioned earlier also affect this setup.

Netflix’s Eureka:
Eureka is Netflix’s middle-tier load balancing and discovery service. The Eureka server is the registry for services. The servers replicate their state to each other through an asynchronous model, which means each instance may have a slightly different picture of all the services at any given time.
Service registration is handled by the client component. Services embed the client in their application code. At runtime, the client registers the service and periodically sends heartbeats to renew its leases.
Service discovery is handled by the smart client as well. It retrieves the current registrations from the server and caches them locally. The client periodically refreshes its state and also handles load balancing and failover.
Eureka was designed to be very resilient during failures. It favors availability over strong consistency (the AP model in CAP terms) and can operate under a number of different failure modes.
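Outside the JVM, the same operations can be driven through Eureka’s REST API. The following is only a rough sketch; the server URL, application name and payload fields are illustrative and vary by Eureka version.

import requests

EUREKA = "http://eureka-server:8080/eureka/v2"
APP, INSTANCE = "CACHE-SERVICE", "10.0.0.12"

# Register an instance with the Eureka server.
requests.post("%s/apps/%s" % (EUREKA, APP), json={"instance": {
    "hostName": INSTANCE,
    "app": APP,
    "ipAddr": INSTANCE,
    "vipAddress": "cache-service",
    "status": "UP",
    "port": {"$": 6379, "@enabled": "true"},
    "dataCenterInfo": {"name": "MyOwn",
                       "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo"},
}}, headers={"Content-Type": "application/json"})

# Renew the lease periodically (heartbeat) and fetch current registrations.
requests.put("%s/apps/%s/%s" % (EUREKA, APP, INSTANCE))
instances = requests.get("%s/apps/%s" % (EUREKA, APP),
                         headers={"Accept": "application/json"}).json()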
[Figure 4: Using Eureka to handle service discovery and load balancing]

Advantages
1. Much simpler and less complicated to implement than the ZooKeeper-based solution.
2. Load balancing, fault tolerance and service discovery are handled implicitly by the Eureka client.

Consul.io

Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure. It provides several key features:

  • Service Discovery: Clients of Consul can provide a service, such as api or mysql, and other clients can use Consul to discover providers of a given service. Using either DNS or HTTP, applications can easily find the services they depend upon (a sketch using the HTTP API appears after this list).
  • Health Checking: Consul clients can provide any number of health checks, either associated with a given service (“is the webserver returning 200 OK”), or with the local node (“is memory utilization below 90%”). This information can be used by an operator to monitor cluster health, and it is used by the service discovery components to route traffic away from unhealthy hosts.
  • Key/Value Store: Applications can make use of Consul’s hierarchical key/value store for any number of purposes including: dynamic configuration, feature flagging, coordination, leader election, etc. The simple HTTP API makes it easy to use.
  • Multi Datacenter: Consul supports multiple datacenters out of the box. This means users of Consul do not have to worry about building additional layers of abstraction to grow to multiple regions.

Consul is designed to be friendly to both the DevOps community and application developers, making it perfect for modern, elastic infrastructures.
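As referenced above, here is a rough sketch of registering and discovering a service through the local Consul agent’s HTTP API; the service name, port and agent address are illustrative.

import requests

AGENT = "http://127.0.0.1:8500"

# Register a service, with a simple TCP health check, on the local agent.
requests.put("%s/v1/agent/service/register" % AGENT, json={
    "Name": "cache-service",
    "Port": 6379,
    "Check": {"TCP": "127.0.0.1:6379", "Interval": "10s"},
})

# Discover instances that are currently passing their health checks.
nodes = requests.get("%s/v1/health/service/cache-service" % AGENT,
                     params={"passing": "true"}).json()
endpoints = [(n["Service"]["Address"] or n["Node"]["Address"], n["Service"]["Port"])
             for n in nodes]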

Etcd

Etcd is a highly available key-value store for shared configuration and service discovery. Etcd was inspired by ZooKeeper and Doozer. It’s written in Go, uses Raft for consensus and has an HTTP+JSON based API.
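A rough sketch of service registration and discovery on top of etcd’s v2 HTTP API: each instance registers a key with a TTL (so dead instances expire automatically) and clients list the directory to discover. The keys and endpoint here are illustrative.

import requests

ETCD = "http://127.0.0.1:2379/v2/keys"

# Register: create a key with a TTL; the instance must refresh it periodically.
requests.put("%s/services/cache-service/instance-0001" % ETCD,
             data={"value": "10.0.0.12:6379", "ttl": 30})

# Discover: list every instance currently registered under the service.
resp = requests.get("%s/services/cache-service" % ETCD).json()
endpoints = [node["value"] for node in resp.get("node", {}).get("nodes", [])]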

SkyDNS

SkyDNS is a relatively new project that is written in Go, uses Raft for consensus and also provides a client API over HTTP and DNS. It has some similarities to Etcd and Spotify’s DNS model, and it actually uses the same Raft implementation as Etcd, go-raft.
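Since SkyDNS exposes services over DNS, discovery can be as simple as an SRV lookup; a rough sketch with dnspython follows, where the domain and service name are illustrative and depend on how SkyDNS is configured.

import dns.resolver  # dnspython

answers = dns.resolver.query("cache-service.production.skydns.local", "SRV")
endpoints = [(str(rr.target).rstrip("."), rr.port) for rr in answers]
print(endpoints)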

6. Using a hardware load balancer (works if we maintain our own infrastructure)
A hardware/network load balancer can be used to split network load across multiple servers. The basic principle is that network traffic is sent to a shared IP, often called a virtual IP (VIP) or listening IP. This VIP is an address attached to the load balancer. Once the load balancer receives a request on this VIP, it has to decide where to send it.
This decision is normally controlled by a load balancing method/strategy and a server health check or, in the case of a next-generation device, additionally by a rule set.
The request is then sent to the appropriate server.
There are many feature-rich hardware load balancers such as jetNEXUS, F5 and NetScaler.
Advantages
1. SSL acceleration and offloading can be delegated to the hardware LB, increasing web server performance.
2. No effort is required to develop and maintain load balancing software.
Disadvantages
1. Cost of hardware and support might become a factor.
2. Cannot use a hardware LB if we are deploying our services in the cloud.
