Traffic Management with Istio (4): DNS Resolution with CoreDNS


CoreDNS and Its Plugin Extension

CoreDNS is an incubation-level project under the CNCF umbrella and the successor to SkyDNS. Its main purpose is to build a fast and flexible DNS server that lets users access and use DNS data in different ways. Built on the Caddy server framework, CoreDNS implements a plugin-chain architecture, abstracting most of its logic into individual plugins that it exposes to users. Each plugin performs a DNS function, such as Kubernetes service discovery or Prometheus monitoring.

In addition to pluginization, CoreDNS also has the following features:

  1. Simplified configuration: a more expressive DSL, the Corefile configuration format (also based on the Caddy framework).
  2. An integrated solution: unlike kube-dns, CoreDNS is compiled into a single binary with built-in caching, backend storage, health checks, and other functions. No third-party components are required, which makes deployment more convenient and memory management safer.

Introduction to Corefile

Corefile is the configuration file of CoreDNS (its format derives from the Caddyfile of the Caddy framework). It defines the following:

  1. The protocol and port on which the server listens (multiple servers can be defined to listen on different ports at the same time).
  2. The authoritative DNS resolution zone for which the server is responsible.
  3. The plugins to be loaded by the server.

A typical Corefile format is displayed below:
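As an illustrative sketch (the zone `example.org`, the port, and the plugin choices are placeholder assumptions, not the original example), a minimal Corefile might look like this:

```
example.org:53 {
    # Serve the example.org zone on port 53
    log               # log queries to standard output
    errors            # log errors to standard output
    cache 30          # cache responses for 30 seconds
    proxy . 8.8.8.8   # send queries this server cannot answer upstream
}
```

Each server block names a zone (and optional port), followed by the plugins that server loads.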

The parameters are described as follows:

  1. Zone: the zone for which the server is responsible.
  2. Port: the listening port; optional, defaults to 53.
  3. Plugin: the plugins to be loaded by the server. Each plugin can take multiple parameters.

The Working Mechanism of Plugins

When CoreDNS starts, it launches one or more servers according to the configuration file, each with its own plugin chain. When a DNS request arrives, the following three steps are performed in turn:

  1. If the request matches multiple zones on the current server, the greedy principle is applied and the most specific matching zone is selected.
  2. Once a matching server is found, the plugins on its chain are executed in the order defined by plugin.cfg.
  3. Each plugin determines whether it should process the current request.

There are several possibilities:

  1. The request is processed by the current plugin.
    The plugin generates the corresponding response and returns it to the client, whereupon the request ends. This means that the next plugin (for example, whoami) is not called.
  2. The request is not processed by the current plugin.
    The next plugin is directly called. If there is an error executing the final plugin, the server returns a SERVFAIL response.
  3. The request is processed by the current plugin in the form of a fallthrough.
    If, while a plugin is processing a request, the request may jump to the next plugin, this is called fallthrough, and the `fallthrough` keyword decides whether to allow it. For example, with the hosts plugin, when the queried domain name is not found in /etc/hosts, the next plugin is called.
  4. The request is processed by the current plugin, with hints attached.
    The request is processed and then passed on to the next plugin after some information (a hint) has been added to its response. This additional information becomes part of the final response returned to the client; the metric plugin is one example.
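The fallthrough behavior of the hosts plugin can be sketched in a Corefile like the following (the static entry and upstream address are assumptions for illustration):

```
. {
    hosts {
        10.0.0.1 app.example.org   # names listed here are answered by this plugin
        fallthrough                # unmatched names fall through to the next plugin
    }
    proxy . 8.8.8.8                # the next plugin in the chain handles the rest
}
```

Without the `fallthrough` keyword, a query for a name not in the hosts data would end at the hosts plugin instead of reaching the proxy plugin.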

CoreDNS and Kubernetes

Starting from Kubernetes 1.11, CoreDNS reached GA status as a DNS plugin for Kubernetes. Kubernetes recommends using CoreDNS as the DNS service within the cluster. For Alibaba Cloud Container Service for Kubernetes 1.11.5 and later, the default installation uses CoreDNS as the DNS service. Configuration information for CoreDNS can be viewed using the following command:
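Assuming CoreDNS is deployed in the default kube-system namespace under the configmap name coredns, the configuration can be viewed with:

```shell
# Print the CoreDNS configmap, which contains the Corefile
kubectl -n kube-system get configmap coredns -o yaml
```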

The meaning of each parameter in the configuration file is as follows:

Name        Description
errors      Errors are logged to the standard output.
health      The health status can be viewed at http://localhost:8080/health.
kubernetes  Responds to DNS queries based on the IP of the service. The cluster domain defaults to cluster.local.
prometheus  Monitoring data in Prometheus format can be obtained through http://localhost:9153/metrics.
proxy       If a name cannot be resolved locally, the upstream address is queried. The /etc/resolv.conf configuration of the host is used by default.
cache       Cache time.

Easy Deployment of Istio CoreDNS through the Application Catalog

With Alibaba Cloud Container Service for Kubernetes 1.11.5, you can quickly and easily create a Kubernetes cluster using the management console. See Creating a Kubernetes Cluster for more information.

Open the application catalog on the left, select the Istio CoreDNS chart on the right, and click Parameters on the page that opens. You can modify the parameter values to customize the settings (see below):

The parameters are described as follows:

Name                Description                        Value            Default
replicaCount        Specifies the number of replicas   number           1
coreDNSImage        Specifies the CoreDNS image name   valid image tag  coredns/coredns:1.2.2
coreDNSPluginImage  Specifies the plugin image name    valid image

After modification, select the corresponding cluster, namespace, and release name on the right, then click Deploy. After a few seconds, an Istio CoreDNS release is created, as shown in the following figure:

Change the Cluster CoreDNS Configuration

The cluster IP of the service can be obtained by executing the following command:
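Assuming the release created the service under the name istiocoredns in the istio-system namespace (adjust both to match your deployment), the cluster IP can be obtained with:

```shell
# The CLUSTER-IP column shows the address to use as the upstream DNS server
kubectl get svc istiocoredns -n istio-system
```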

Update the coredns configmap in the cluster. Set the istiocoredns service as the upstream DNS server for the .global domain by adding a .global stanza to the configmap, as follows:
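A sketch of the added stanza, with the placeholder to be replaced by the cluster IP obtained above:

```
global:53 {
    errors
    cache 30
    proxy . <istiocoredns-cluster-IP>   # forward *.global queries to the istiocoredns service
}
```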

After the configmap is changed, the cluster's CoreDNS containers reload the configuration. The reload log can be viewed using the following command:
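Assuming the cluster's CoreDNS pods carry the conventional k8s-app=kube-dns label in kube-system (verify the label in your cluster), the logs can be followed with:

```shell
# Watch the CoreDNS pods for the configuration reload messages
kubectl logs -f -n kube-system -l k8s-app=kube-dns
```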

Create ServiceEntry to Verify DNS Resolution

With ServiceEntry, additional entries can be added to Istio's internal service registry so that services automatically discovered in the mesh can access and route to these manually added services. A ServiceEntry describes the attributes of a service, including its DNS names, virtual IPs, ports, protocols, and endpoints. Such a service may be an API outside the mesh, or an entry in the mesh's internal service registry that is not part of the platform, such as a group of VM-based services that need to communicate with Kubernetes services.

In the following example, a wildcard is used to define the hosts and an address is specified:
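A hedged sketch of such a ServiceEntry (the entry name, hosts, virtual IP, and endpoint address are all illustrative assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-entry
spec:
  hosts:
  - "*.test.global"          # wildcard host for services in the remote cluster
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  addresses:
  - 127.255.0.2              # virtual IP assigned to this entry
  endpoints:
  - address: 192.168.0.10    # placeholder endpoint address
    ports:
      http: 15443
```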

Execute the following command to view the logs of the istiocoredns container. You can see that the domain name mappings of the ServiceEntry above have been loaded:
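Assuming the release labels its pods with app=istiocoredns in the istio-system namespace (adjust to your deployment), the logs can be viewed with:

```shell
# The plugin logs each host-to-address mapping it loads from ServiceEntry resources
kubectl logs -n istio-system -l app=istiocoredns
```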

Create a test container using the image tutum/dnsutils:
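A sketch of such a command (the pod name is an arbitrary choice):

```shell
# Start an interactive throwaway pod with dig and other DNS tools installed
kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- bash
```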

After entering the container command line, execute dig to view the corresponding domain name resolution:
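Inside the container, a query against one of the wildcard hosts (the name svc1.test.global is an assumption matching the ServiceEntry sketch above) would look like:

```shell
# Resolve a *.test.global name; it should return the address configured in the ServiceEntry
dig +short A svc1.test.global
```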


Istio supports several different topologies for distributing application services outside a single cluster. For example, services in a service mesh can use ServiceEntry to access standalone external services or services exposed by another service mesh (commonly referred to as mesh federation). Using Istio CoreDNS to provide DNS resolution for services in a remote cluster obviates the need to modify existing applications, allowing users to access services in the remote cluster as if they were in the local cluster.