Learn more about kubebuilder, the Kubernetes CRD development tool

Original article: https://blog.csdn.net/u012986012/article/details/120271091

General development process

How would we implement an Operator without any scaffolding? The work can be divided into the following steps:

  • CRD (API) definition
  • Controller development: writing the reconciliation logic
  • Testing and deployment

API definition

First, define the API types and their fields, then use the k8s.io/code-generator project to generate the API-related code.
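For example, the sample-controller's Foo type is defined roughly as follows (abridged from its pkg/apis/samplecontroller/v1alpha1/types.go); code-generator then produces the deepcopy functions, clientset, informers and listers for it:

// Foo is a specification for a Foo resource (abridged from sample-controller).
type Foo struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   FooSpec   `json:"spec"`
    Status FooStatus `json:"status"`
}

// FooSpec is the spec for a Foo resource.
type FooSpec struct {
    DeploymentName string `json:"deploymentName"`
    Replicas       *int32 `json:"replicas"`
}

// FooStatus is the status for a Foo resource.
type FooStatus struct {
    AvailableReplicas int32 `json:"availableReplicas"`
}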

Controller implementation

Take the official sample-controller as an example; its architecture is shown in the client-go controller interaction diagram:
(Figure: client-go controller interaction, https://github.com/kubernetes/sample-controller/raw/master/docs/images/client-go-controller-interaction.jpeg)

It is mainly divided into the following steps:

Initialize client configuration

// Create the client config from the master URL / kubeconfig
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    klog.Fatalf("Error building kubeconfig: %s", err.Error())
}
// kubernetes client
kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    klog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}
// crd client
exampleClient, err := clientset.NewForConfig(cfg)
if err != nil {
    klog.Fatalf("Error building example clientset: %s", err.Error())
}

Initialize the Informers and start them

// k8s sharedInformer
kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)
// crd sharedInformer
exampleInformerFactory := informers.NewSharedInformerFactory(exampleClient, time.Second*30)

//Initialize the controller, pass in the informer, and register the Deployment and Foo Informers
controller := NewController(kubeClient, exampleClient,
kubeInformerFactory.Apps().V1().Deployments(),
exampleInformerFactory.Samplecontroller().V1alpha1().Foos())
//Start Informer
kubeInformerFactory.Start(stopCh)
exampleInformerFactory.Start(stopCh)

Finally, start the Controller

if err = controller.Run(2, stopCh); err != nil {
    klog.Fatalf("Error running controller: %s", err.Error())
}

The Controller itself is initialized in NewController:

func NewController(
    kubeclientset kubernetes.Interface,
    sampleclientset clientset.Interface,
    deploymentInformer appsinformers.DeploymentInformer,
    fooInformer informers.FooInformer) *Controller {

    // Create event broadcaster
    utilruntime.Must(samplescheme.AddToScheme(scheme.Scheme))
    klog.V(4).Info("Creating event broadcaster")
    eventBroadcaster := record.NewBroadcaster()
    eventBroadcaster.StartStructuredLogging(0)
    eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: kubeclientset.CoreV1().Events("")})
    recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: controllerAgentName})

    controller := &Controller{
        kubeclientset:     kubeclientset,
        sampleclientset:   sampleclientset,
        deploymentsLister: deploymentInformer.Lister(),             // Read-only cache
        deploymentsSynced: deploymentInformer.Informer().HasSynced, // Calling Informer() registers the informer with the shared informer factory
        foosLister:        fooInformer.Lister(),
        foosSynced:        fooInformer.Informer().HasSynced,
        workqueue:         workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "Foos"), // Initialize the work queue
        recorder:          recorder,
    }

    klog.Info("Setting up event handlers")
    // Register the event callbacks
    fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: controller.enqueueFoo,
        UpdateFunc: func(old, new interface{}) {
            controller.enqueueFoo(new)
        },
    })
    deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: controller.handleObject,
        UpdateFunc: func(old, new interface{}) {
            newDepl := new.(*appsv1.Deployment)
            oldDepl := old.(*appsv1.Deployment)
            if newDepl.ResourceVersion == oldDepl.ResourceVersion {
                // Periodic resync will send update events for all known Deployments.
                // Two different versions of the same Deployment will always have different RVs.
                return
            }
            controller.handleObject(new)
        },
        DeleteFunc: controller.handleObject,
    })
    return controller
}

Controller startup follows the typical Kubernetes control-loop workflow: it continuously takes objects from the work queue and processes them until they reach the desired state.

func (c *Controller) Run(workers int, stopCh <-chan struct{}) error {
    defer utilruntime.HandleCrash()
    defer c.workqueue.ShutDown()

    // Wait for the caches to sync
    klog.Info("Waiting for informer caches to sync")
    if ok := cache.WaitForCacheSync(stopCh, c.deploymentsSynced, c.foosSynced); !ok {
        return fmt.Errorf("failed to wait for caches to sync")
    }

    // Start the workers, one goroutine per worker
    for i := 0; i < workers; i++ {
        go wait.Until(c.runWorker, time.Second, stopCh)
    }

    // Wait for the exit signal
    <-stopCh
    return nil
}

// The worker is a loop that keeps calling processNextWorkItem
func (c *Controller) runWorker() {
    for c.processNextWorkItem() {
    }
}

func (c *Controller) processNextWorkItem() bool {
    // Get an object from the work queue
    obj, shutdown := c.workqueue.Get()
    if shutdown {
        return false
    }

    // We wrap this block in a func so we can defer c.workqueue.Done.
    err := func(obj interface{}) error {
        defer c.workqueue.Done(obj)
        var key string
        var ok bool
        if key, ok = obj.(string); !ok {
            c.workqueue.Forget(obj)
            utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
            return nil
        }
        // Core processing logic
        if err := c.syncHandler(key); err != nil {
            // Processing failed, requeue with rate limiting
            c.workqueue.AddRateLimited(key)
            return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
        }
        // Processing succeeded; forget the item so it is not requeued
        c.workqueue.Forget(obj)
        klog.Infof("Successfully synced '%s'", key)
        return nil
    }(obj)

    if err != nil {
        utilruntime.HandleError(err)
        return true
    }
    return true
}

Operator mode

In the Operator pattern, the user only needs to implement Reconcile, which corresponds to syncHandler in the sample-controller; kubebuilder has already implemented the other steps for us. Let's explore step by step how kubebuilder triggers the Reconcile logic.
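For orientation, a user-implemented Reconcile typically has the following shape. This is a minimal sketch, assuming the scaffolded GameReconciler type and the Game CRD from the mygame example below, plus the standard controller-runtime packages (ctrl, client, log); the actual reconciliation logic is project-specific.

// Minimal sketch of a user-implemented Reconcile (illustrative only).
func (r *GameReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    logger := log.FromContext(ctx)

    // Read the object through the manager's cache-backed client.
    var game myappv1.Game
    if err := r.Get(ctx, req.NamespacedName, &game); err != nil {
        // The object may have been deleted; nothing more to do.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Compare the observed state with game.Spec, create/update the owned
    // resources (e.g. a Deployment), then update the status as needed.
    logger.Info("reconciling", "game", req.NamespacedName)

    return ctrl.Result{}, nil
}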

Taking mygame as an example, the main file generated by kubebuilder is as follows:

var (
    // Used to parse kubernetes objects
    scheme   = runtime.NewScheme()
    setupLog = ctrl.Log.WithName("setup")
)

func init() {
    utilruntime.Must(clientgoscheme.AddToScheme(scheme))
    // Add the custom object to the scheme
    utilruntime.Must(myappv1.AddToScheme(scheme))
    //+kubebuilder:scaffold:scheme
}

func main() {
    // ...
    ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

    // Initialize the controller manager
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        Scheme:                 scheme,
        MetricsBindAddress:     metricsAddr,
        Port:                   9443,
        HealthProbeBindAddress: probeAddr,
        LeaderElection:         enableLeaderElection,
        LeaderElectionID:       "7bc453ad.qingwave.github.io",
    })
    if err != nil {
        setupLog.Error(err, "unable to start manager")
        os.Exit(1)
    }

    // Initialize the Reconciler
    if err = (&controllers.GameReconciler{
        Client: mgr.GetClient(),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "Game")
        os.Exit(1)
    }

    // Initialize the Webhook
    if enableWebhook {
        if err = (&myappv1.Game{}).SetupWebhookWithManager(mgr); err != nil {
            setupLog.Error(err, "unable to create webhook", "webhook", "Game")
            os.Exit(1)
        }
    }
    //+kubebuilder:scaffold:builder

    // Start the manager
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}

kubebuilder builds on controller-runtime: the generated main file initializes the controller manager, the Reconciler and (optionally) the Webhook, and finally starts the manager.

Let's look at each step separately.

Manager initialization

The code is as follows:

func New(config *rest.Config, options Options) (Manager, error) {
    // Set default configuration
    options = setOptionsDefaults(options)

    // cluster initialization
    cluster, err := cluster.New(config, func(clusterOptions *cluster.Options) {
        clusterOptions.Scheme = options.Scheme
        clusterOptions.MapperProvider = options.MapperProvider
        clusterOptions.Logger = options.Logger
        clusterOptions.SyncPeriod = options.SyncPeriod
        clusterOptions.Namespace = options.Namespace
        clusterOptions.NewCache = options.NewCache
        clusterOptions.ClientBuilder = options.ClientBuilder
        clusterOptions.ClientDisableCacheFor = options.ClientDisableCacheFor
        clusterOptions.DryRunClient = options.DryRunClient
        clusterOptions.EventBroadcaster = options.EventBroadcaster
    })
    if err != nil {
        return nil, err
    }

    // event recorder initialization
    recorderProvider, err := options.newRecorderProvider(config, cluster.GetScheme(), options.Logger.WithName("events"), options.makeBroadcaster)
    if err != nil {
        return nil, err
    }

    // Resource lock configuration for leader election
    leaderConfig := options.LeaderElectionConfig
    if leaderConfig == nil {
        leaderConfig = rest.CopyConfig(config)
    }
    resourceLock, err := options.newResourceLock(leaderConfig, recorderProvider, leaderelection.Options{
        LeaderElection:             options.LeaderElection,
        LeaderElectionResourceLock: options.LeaderElectionResourceLock,
        LeaderElectionID:           options.LeaderElectionID,
        LeaderElectionNamespace:    options.LeaderElectionNamespace,
    })
    if err != nil {
        return nil, err
    }

    // ...

    return &controllerManager{
        cluster:                 cluster,
        recorderProvider:        recorderProvider,
        resourceLock:            resourceLock,
        metricsListener:         metricsListener,
        metricsExtraHandlers:    metricsExtraHandlers,
        logger:                  options.Logger,
        elected:                 make(chan struct{}),
        port:                    options.Port,
        host:                    options.Host,
        certDir:                 options.CertDir,
        leaseDuration:           *options.LeaseDuration,
        renewDeadline:           *options.RenewDeadline,
        retryPeriod:             *options.RetryPeriod,
        healthProbeListener:     healthProbeListener,
        readinessEndpointName:   options.ReadinessEndpointName,
        livenessEndpointName:    options.LivenessEndpointName,
        gracefulShutdownTimeout: *options.GracefulShutdownTimeout,
        internalProceduresStop:  make(chan struct{}),
        leaderElectionStopped:   make(chan struct{}),
    }, nil
}

New mainly initializes the configuration (ports, leader-election settings, the event recorder) and, most importantly, the cluster. The cluster is what the manager uses to access Kubernetes, and its initialization code is as follows:

// New constructs a brand new cluster
func New(config *rest.Config, opts ...Option) (Cluster, error) {
    if config == nil {
        return nil, errors.New("must specify Config")
    }

    options := Options{}
    for _, opt := range opts {
        opt(&options)
    }
    options = setOptionsDefaults(options)

    // Create the mapper provider
    mapper, err := options.MapperProvider(config)
    if err != nil {
        options.Logger.Error(err, "Failed to get API Group-Resources")
        return nil, err
    }

    // Create the cache for the cached read client and registering informers
    cache, err := options.NewCache(config, cache.Options{Scheme: options.Scheme, Mapper: mapper, Resync: options.SyncPeriod, Namespace: options.Namespace})
    if err != nil {
        return nil, err
    }

    clientOptions := client.Options{Scheme: options.Scheme, Mapper: mapper}

    apiReader, err := client.New(config, clientOptions)
    if err != nil {
        return nil, err
    }

    writeObj, err := options.ClientBuilder.
        WithUncached(options.ClientDisableCacheFor...).
        Build(cache, config, clientOptions)
    if err != nil {
        return nil, err
    }

    if options.DryRunClient {
        writeObj = client.NewDryRunClient(writeObj)
    }

    recorderProvider, err := options.newRecorderProvider(config, options.Scheme, options.Logger.WithName("events"), options.makeBroadcaster)
    if err != nil {
        return nil, err
    }

    return &cluster{
        config:           config,
        scheme:           options.Scheme,
        cache:            cache,
        fieldIndexes:     cache,
        client:           writeObj,
        apiReader:        apiReader,
        recorderProvider: recorderProvider,
        mapper:           mapper,
        logger:           options.Logger,
    }, nil
}

The cache and the read/write clients are created here.

Cache initialization

The cache is created as follows:

// New initializes and returns a new Cache.
func New(config *rest.Config, opts Options) (Cache, error) {
    opts, err := defaultOpts(config, opts)
    if err != nil {
        return nil, err
    }
    im := internal.NewInformersMap(config, opts.Scheme, opts.Mapper, *opts.Resync, opts.Namespace)
    return &informerCache{InformersMap: im}, nil
}

New calls NewInformersMap to create the informer map, which is split into structured, unstructured and metadata maps.

func NewInformersMap(config *rest.Config, scheme *runtime.Scheme, mapper meta.RESTMapper, resync time.Duration, namespace string) *InformersMap {
    return &InformersMap{
        structured:   newStructuredInformersMap(config, scheme, mapper, resync, namespace),
        unstructured: newUnstructuredInformersMap(config, scheme, mapper, resync, namespace),
        metadata:     newMetadataInformersMap(config, scheme, mapper, resync, namespace),
        Scheme:       scheme,
    }
}

All three ultimately call newSpecificInformersMap:

// newStructuredInformersMap creates a new InformersMap for structured objects.
func newStructuredInformersMap(config *rest.Config, scheme *runtime.Scheme, mapper meta.RESTMapper, resync time.Duration, namespace string) *specificInformersMap {
    return newSpecificInformersMap(config, scheme, mapper, resync, namespace, createStructuredListWatch)
}

func newSpecificInformersMap(config *rest.Config, scheme *runtime.Scheme, mapper meta.RESTMapper, resync time.Duration, namespace string, createListWatcher createListWatcherFunc) *specificInformersMap {
    ip := &specificInformersMap{
        config:            config,
        Scheme:            scheme,
        mapper:            mapper,
        informersByGVK:    make(map[schema.GroupVersionKind]*MapEntry),
        codecs:            serializer.NewCodecFactory(scheme),
        paramCodec:        runtime.NewParameterCodec(scheme),
        resync:            resync,
        startWait:         make(chan struct{}),
        createListWatcher: createListWatcher,
        namespace:         namespace,
    }
    return ip
}

func createStructuredListWatch(gvk schema.GroupVersionKind, ip *specificInformersMap) (*cache.ListWatch, error) {
    // Kubernetes APIs work against Resources, not GroupVersionKinds. Map the
    // groupVersionKind to the Resource API we will use.
    mapping, err := ip.mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
    if err != nil {
        return nil, err
    }

    client, err := apiutil.RESTClientForGVK(gvk, false, ip.config, ip.codecs)
    if err != nil {
        return nil, err
    }
    listGVK := gvk.GroupVersion().WithKind(gvk.Kind + "List")
    listObj, err := ip.Scheme.New(listGVK)
    if err != nil {
        return nil, err
    }

    // TODO: the functions that make use of this ListWatch should be adapted to
    // pass in their own contexts instead of relying on this fixed one here.
    ctx := context.TODO()

    // Create a new ListWatch for the obj
    return &cache.ListWatch{
        ListFunc: func(opts metav1.ListOptions) (runtime.Object, error) {
            res := listObj.DeepCopyObject()
            isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
            err := client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Do(ctx).Into(res)
            return res, err
        },
        // Setup the watch function
        WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
            // Watch needs to be set to true separately
            opts.Watch = true
            isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
            return client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Watch(ctx)
        },
    }, nil
}

newSpecificInformersMap uses informersByGVK to record the mapping between each GVK in the scheme and its informer; at run time the informer for a given GVK is looked up and used to List/Get objects.

The createListWatcher function (here createStructuredListWatch) is used in newSpecificInformersMap to initialize the ListWatch object.

Client initialization

Several kinds of clients are created here: apiReader reads objects directly from the apiserver, while writeObj can read from either the apiserver or the cache.

apiReader, err := client.New(config, clientOptions)
if err != nil {
    return nil, err
}

func New(config *rest.Config, options Options) (Client, error) {
    if config == nil {
        return nil, fmt.Errorf("must provide non-nil rest.Config to client.New")
    }

    // Init a scheme if none provided
    if options.Scheme == nil {
        options.Scheme = scheme.Scheme
    }

    // Init a Mapper if none provided
    if options.Mapper == nil {
        var err error
        options.Mapper, err = apiutil.NewDynamicRESTMapper(config)
        if err != nil {
            return nil, err
        }
    }

    // Cache of resource metadata (per GVK) used by the client
    clientcache := &clientCache{
        config:                     config,
        scheme:                     options.Scheme,
        mapper:                     options.Mapper,
        codecs:                     serializer.NewCodecFactory(options.Scheme),
        structuredResourceByType:   make(map[schema.GroupVersionKind]*resourceMeta),
        unstructuredResourceByType: make(map[schema.GroupVersionKind]*resourceMeta),
    }

    rawMetaClient, err := metadata.NewForConfig(config)
    if err != nil {
        return nil, fmt.Errorf("unable to construct metadata-only client for use as part of client: %w", err)
    }

    c := &client{
        typedClient: typedClient{
            cache:      clientcache,
            paramCodec: runtime.NewParameterCodec(options.Scheme),
        },
        unstructuredClient: unstructuredClient{
            cache:      clientcache,
            paramCodec: noConversionParamCodec{},
        },
        metadataClient: metadataClient{
            client:     rawMetaClient,
            restMapper: options.Mapper,
        },
        scheme: options.Scheme,
        mapper: options.Mapper,
    }
    return c, nil
}

writeObj implements a client with read/write separation: writes go directly to the apiserver, while reads are served from the cache when the object type is cached and fall back to the apiserver otherwise.

writeObj, err := options.ClientBuilder.
    WithUncached(options.ClientDisableCacheFor...).
    Build(cache, config, clientOptions)
if err != nil {
    return nil, err
}

func (n *newClientBuilder) Build(cache cache.Cache, config *rest.Config, options client.Options) (client.Client, error) {
    // Create the Client for Write operations.
    c, err := client.New(config, options)
    if err != nil {
        return nil, err
    }

    return client.NewDelegatingClient(client.NewDelegatingClientInput{
        CacheReader:     cache,
        Client:          c,
        UncachedObjects: n.uncached,
    })
}

// Read/write separation client
func NewDelegatingClient(in NewDelegatingClientInput) (Client, error) {
    uncachedGVKs := map[schema.GroupVersionKind]struct{}{}
    for _, obj := range in.UncachedObjects {
        gvk, err := apiutil.GVKForObject(obj, in.Client.Scheme())
        if err != nil {
            return nil, err
        }
        uncachedGVKs[gvk] = struct{}{}
    }
    return &delegatingClient{
        scheme: in.Client.Scheme(),
        mapper: in.Client.RESTMapper(),
        Reader: &delegatingReader{
            CacheReader:       in.CacheReader,
            ClientReader:      in.Client,
            scheme:            in.Client.Scheme(),
            uncachedGVKs:      uncachedGVKs,
            cacheUnstructured: in.CacheUnstructured,
        },
        Writer:       in.Client,
        StatusClient: in.Client,
    }, nil
}

// Get retrieves an obj for a given object key from the Kubernetes Cluster.
func (d *delegatingReader) Get(ctx context.Context, key ObjectKey, obj Object) error {
    // Select the reader according to whether the type is cached or not
    if isUncached, err := d.shouldBypassCache(obj); err != nil {
        return err
    } else if isUncached {
        return d.ClientReader.Get(ctx, key, obj)
    }
    return d.CacheReader.Get(ctx, key, obj)
}
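Because the manager's Options expose ClientDisableCacheFor (passed down to the cluster as options.ClientDisableCacheFor above), specific types can be opted out of the cache so that reads for them always go to the apiserver. A hedged sketch; the choice of Secret is purely illustrative:

// Sketch: bypass the delegating client's cache for Secrets, so Get/List on
// Secrets read from the apiserver directly (Secret is an arbitrary example).
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:                scheme,
    ClientDisableCacheFor: []client.Object{&corev1.Secret{}},
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}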

Controller initialization

The Controller initialization code is as follows:

func (r *GameReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        WithOptions(controller.Options{
            MaxConcurrentReconciles: 3,
        }).
        For(&myappv1.Game{}).       // The resource to reconcile
        Owns(&appsv1.Deployment{}). // Also watch Deployments whose owner is the current resource
        Complete(r)
}

// Complete builds the Application ControllerManagedBy.
func (blder *Builder) Complete(r reconcile.Reconciler) error {
    _, err := blder.Build(r)
    return err
}

// Build builds the Application ControllerManagedBy and returns the Controller it created.
func (blder *Builder) Build(r reconcile.Reconciler) (controller.Controller, error) {
    if r == nil {
        return nil, fmt.Errorf("must provide a non-nil Reconciler")
    }
    if blder.mgr == nil {
        return nil, fmt.Errorf("must provide a non-nil Manager")
    }
    if blder.forInput.err != nil {
        return nil, blder.forInput.err
    }
    // Checking the reconcile type exist or not
    if blder.forInput.object == nil {
        return nil, fmt.Errorf("must provide an object for reconciliation")
    }

    // Set the Config
    blder.loadRestConfig()

    // Set the ControllerManagedBy
    if err := blder.doController(r); err != nil {
        return nil, err
    }

    // Set the Watch
    if err := blder.doWatch(); err != nil {
        return nil, err
    }

    return blder.ctrl, nil
}

To initialize the Controller, ctrl.NewControllerManagedBy creates a Builder, the configuration is filled in, and the Build method completes the initialization. Build mainly does three things:

  1. Set the configuration
  2. doController: create the controller
  3. doWatch: set up the resources to be watched

Let's first look at controller initialization

func (blder *Builder) doController(r reconcile.Reconciler) error {
    ctrlOptions := blder.ctrlOptions
    if ctrlOptions.Reconciler == nil {
        ctrlOptions.Reconciler = r
    }

    gvk, err := getGvk(blder.forInput.object, blder.mgr.GetScheme())
    if err != nil {
        return err
    }

    // Setup the logger.
    if ctrlOptions.Log == nil {
        ctrlOptions.Log = blder.mgr.GetLogger()
    }
    ctrlOptions.Log = ctrlOptions.Log.WithValues("reconciler group", gvk.Group, "reconciler kind", gvk.Kind)

    // Build the controller and return.
    blder.ctrl, err = newController(blder.getControllerName(gvk), blder.mgr, ctrlOptions)
    return err
}

func New(name string, mgr manager.Manager, options Options) (Controller, error) {
    c, err := NewUnmanaged(name, mgr, options)
    if err != nil {
        return nil, err
    }

    // Add the controller as a Manager component
    return c, mgr.Add(c)
}

func NewUnmanaged(name string, mgr manager.Manager, options Options) (Controller, error) {
    if options.Reconciler == nil {
        return nil, fmt.Errorf("must specify Reconciler")
    }
    if len(name) == 0 {
        return nil, fmt.Errorf("must specify Name for Controller")
    }
    if options.Log == nil {
        options.Log = mgr.GetLogger()
    }
    if options.MaxConcurrentReconciles <= 0 {
        options.MaxConcurrentReconciles = 1
    }
    if options.CacheSyncTimeout == 0 {
        options.CacheSyncTimeout = 2 * time.Minute
    }
    if options.RateLimiter == nil {
        options.RateLimiter = workqueue.DefaultControllerRateLimiter()
    }

    // Inject dependencies into Reconciler
    if err := mgr.SetFields(options.Reconciler); err != nil {
        return nil, err
    }

    // Create controller with dependencies set
    return &controller.Controller{
        Do: options.Reconciler,
        MakeQueue: func() workqueue.RateLimitingInterface {
            return workqueue.NewNamedRateLimitingQueue(options.RateLimiter, name)
        },
        MaxConcurrentReconciles: options.MaxConcurrentReconciles,
        CacheSyncTimeout:        options.CacheSyncTimeout,
        SetFields:               mgr.SetFields,
        Name:                    name,
        Log:                     options.Log.WithName("controller").WithName(name),
    }, nil
}

doController calls controller.New to create a controller and add it to the manager. NewUnmanaged contains the familiar configuration: just like in the sample-controller above, the work queue and the maximum number of concurrent workers are set here.

The doWatch code is as follows

func (blder *Builder) doWatch() error {
    // Reconcile type
    typeForSrc, err := blder.project(blder.forInput.object, blder.forInput.objectProjection)
    if err != nil {
        return err
    }
    src := &source.Kind{Type: typeForSrc}
    hdler := &handler.EnqueueRequestForObject{}
    allPredicates := append(blder.globalPredicates, blder.forInput.predicates...)
    if err := blder.ctrl.Watch(src, hdler, allPredicates...); err != nil {
        return err
    }

    // Watches the managed types
    for _, own := range blder.ownsInput {
        typeForSrc, err := blder.project(own.object, own.objectProjection)
        if err != nil {
            return err
        }
        src := &source.Kind{Type: typeForSrc}
        hdler := &handler.EnqueueRequestForOwner{
            OwnerType:    blder.forInput.object,
            IsController: true,
        }
        allPredicates := append([]predicate.Predicate(nil), blder.globalPredicates...)
        allPredicates = append(allPredicates, own.predicates...)
        if err := blder.ctrl.Watch(src, hdler, allPredicates...); err != nil {
            return err
        }
    }

    // Do the watch requests
    for _, w := range blder.watchesInput {
        allPredicates := append([]predicate.Predicate(nil), blder.globalPredicates...)
        allPredicates = append(allPredicates, w.predicates...)

        // If the source of this watch is of type *source.Kind, project it.
        if srckind, ok := w.src.(*source.Kind); ok {
            typeForSrc, err := blder.project(srckind.Type, w.objectProjection)
            if err != nil {
                return err
            }
            srckind.Type = typeForSrc
        }

        if err := blder.ctrl.Watch(w.src, w.eventhandler, allPredicates...); err != nil {
            return err
        }
    }
    return nil
}

doWatch watches the current (For) resource, the ownsInput resources (those whose owner is the current resource), and any watchesInput registered through the builder, calling ctrl.Watch to register each of them. The eventhandler parameter decides what gets enqueued: for the current resource it is handler.EnqueueRequestForObject, which enqueues the object itself, while handler.EnqueueRequestForOwner enqueues the object's owner.

type EnqueueRequestForObject struct{}

// Create implements EventHandler
func (e *EnqueueRequestForObject) Create(evt event.CreateEvent, q workqueue.RateLimitingInterface) {
    if evt.Object == nil {
        enqueueLog.Error(nil, "CreateEvent received with no metadata", "event", evt)
        return
    }
    // Add the object's key to the work queue
    q.Add(reconcile.Request{NamespacedName: types.NamespacedName{
        Name:      evt.Object.GetName(),
        Namespace: evt.Object.GetNamespace(),
    }})
}
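Besides For and Owns, arbitrary sources can be registered through the builder's Watches (the watchesInput mentioned above) together with a custom enqueue function. A minimal sketch, assuming the controller-runtime version shown in this article; the ConfigMap source and the label key are illustrative assumptions, not part of the mygame example:

// Hypothetical variant of SetupWithManager: also watch ConfigMaps and enqueue
// the Game named in a label (the label key is an illustrative assumption).
func (r *GameReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&myappv1.Game{}).
        Owns(&appsv1.Deployment{}).
        Watches(
            &source.Kind{Type: &corev1.ConfigMap{}},
            handler.EnqueueRequestsFromMapFunc(func(obj client.Object) []reconcile.Request {
                name, ok := obj.GetLabels()["myapp.qingwave.github.io/game"]
                if !ok {
                    return nil
                }
                return []reconcile.Request{{
                    NamespacedName: types.NamespacedName{Namespace: obj.GetNamespace(), Name: name},
                }}
            }),
        ).
        Complete(r)
}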

The Watch is implemented as follows:

func (c *Controller) Watch(src source.Source, evthdler handler.EventHandler, prct ...predicate.Predicate) error {
    c.mu.Lock()
    defer c.mu.Unlock()

    // Inject Cache into arguments
    if err := c.SetFields(src); err != nil {
        return err
    }
    if err := c.SetFields(evthdler); err != nil {
        return err
    }
    for _, pr := range prct {
        if err := c.SetFields(pr); err != nil {
            return err
        }
    }

    if !c.Started {
        c.startWatches = append(c.startWatches, watchDescription{src: src, handler: evthdler, predicates: prct})
        return nil
    }

    c.Log.Info("Starting EventSource", "source", src)
    return src.Start(c.ctx, evthdler, c.Queue, prct...)
}

func (ks *Kind) InjectCache(c cache.Cache) error {
    if ks.cache == nil {
        ks.cache = c
    }
    return nil
}

func (ks *Kind) Start(ctx context.Context, handler handler.EventHandler, queue workqueue.RateLimitingInterface, prct ...predicate.Predicate) error {
    ...
    i, err := ks.cache.GetInformer(ctx, ks.Type)
    if err != nil {
        if kindMatchErr, ok := err.(*meta.NoKindMatchError); ok {
            log.Error(err, "if kind is a CRD, it should be installed before calling Start", "kind", kindMatchErr.GroupKind)
        }
        return err
    }
    i.AddEventHandler(internal.EventHandler{Queue: queue, EventHandler: handler, Predicates: prct})
    return nil
}

// Informer Get implementation
func (m *InformersMap) Get(ctx context.Context, gvk schema.GroupVersionKind, obj runtime.Object) (bool, *MapEntry, error) {
    switch obj.(type) {
    case *unstructured.Unstructured:
        return m.unstructured.Get(ctx, gvk, obj)
    case *unstructured.UnstructuredList:
        return m.unstructured.Get(ctx, gvk, obj)
    case *metav1.PartialObjectMetadata:
        return m.metadata.Get(ctx, gvk, obj)
    case *metav1.PartialObjectMetadataList:
        return m.metadata.Get(ctx, gvk, obj)
    default:
        return m.structured.Get(ctx, gvk, obj)
    }
}

// If the informer does not exist yet, create a new one and add it to the informer map
func (ip *specificInformersMap) Get(ctx context.Context, gvk schema.GroupVersionKind, obj runtime.Object) (bool, *MapEntry, error) {
    // Return the informer if it is found
    i, started, ok := func() (*MapEntry, bool, bool) {
        ip.mu.RLock()
        defer ip.mu.RUnlock()
        i, ok := ip.informersByGVK[gvk]
        return i, ip.started, ok
    }()

    if !ok {
        var err error
        if i, started, err = ip.addInformerToMap(gvk, obj); err != nil {
            return started, nil, err
        }
    }
    ...
    return started, i, nil
}

Watch injects the cache via the SetFields method and appends the watch to the controller's startWatches list. If the controller has already started, it calls the source's Start method directly to register the EventHandler callback.

Manager startup

Finally, let's look at the Manager startup process

func (cm *controllerManager) Start(ctx context.Context) (err error) {
    if err := cm.Add(cm.cluster); err != nil {
        return fmt.Errorf("failed to add cluster to runnables: %w", err)
    }
    cm.internalCtx, cm.internalCancel = context.WithCancel(ctx)

    stopComplete := make(chan struct{})
    defer close(stopComplete)
    defer func() {
        stopErr := cm.engageStopProcedure(stopComplete)
        ...
    }()

    cm.errChan = make(chan error)

    // Serve metrics
    if cm.metricsListener != nil {
        go cm.serveMetrics()
    }

    // Serve health probes
    if cm.healthProbeListener != nil {
        go cm.serveHealthProbes()
    }

    go cm.startNonLeaderElectionRunnables()

    go func() {
        if cm.resourceLock != nil {
            err := cm.startLeaderElection()
            if err != nil {
                cm.errChan <- err
            }
        } else {
            // Treat not having leader election enabled the same as being elected.
            cm.startLeaderElectionRunnables()
            close(cm.elected)
        }
    }()

    select {
    case <-ctx.Done():
        // We are done
        return nil
    case err := <-cm.errChan:
        // Error starting or running a runnable
        return err
    }
}

The main processes include:

  1. Start the metrics service
  2. Start the health check service
  3. Start the non-leader-election runnables
  4. Start leader election and then the leader-election runnables

The non-leader-election runnables are started as follows:

func (cm *controllerManager) startNonLeaderElectionRunnables() {
    cm.mu.Lock()
    defer cm.mu.Unlock()

    cm.waitForCache(cm.internalCtx)

    // Start the non-leaderelection Runnables after the cache has synced
    for _, c := range cm.nonLeaderElectionRunnables {
        cm.startRunnable(c)
    }
}

func (cm *controllerManager) waitForCache(ctx context.Context) {
    if cm.started {
        return
    }

    for _, cache := range cm.caches {
        cm.startRunnable(cache)
    }

    for _, cache := range cm.caches {
        cache.GetCache().WaitForCacheSync(ctx)
    }

    cm.started = true
}

The caches are started first, then the other runnables. The leader-election runnables work the same way; since the controller was added to the leader-election runnables during initialization, this is ultimately where the controller gets started.

func (c *Controller) Start(ctx context.Context) error {
    ...
    c.Queue = c.MakeQueue()
    defer c.Queue.ShutDown() // needs to be outside the iife so that we shutdown after the stop channel is closed

    err := func() error {
        defer c.mu.Unlock()
        defer utilruntime.HandleCrash()

        // Start the event sources registered via Watch
        for _, watch := range c.startWatches {
            c.Log.Info("Starting EventSource", "source", watch.src)
            if err := watch.src.Start(ctx, watch.handler, c.Queue, watch.predicates...); err != nil {
                return err
            }
        }

        // Wait for the caches of all syncing sources
        for _, watch := range c.startWatches {
            syncingSource, ok := watch.src.(source.SyncingSource)
            if !ok {
                continue
            }
            if err := func() error {
                // use a context with timeout for launching sources and syncing caches.
                sourceStartCtx, cancel := context.WithTimeout(ctx, c.CacheSyncTimeout)
                defer cancel()
                if err := syncingSource.WaitForSync(sourceStartCtx); err != nil {
                    err := fmt.Errorf("failed to wait for %s caches to sync: %w", c.Name, err)
                    c.Log.Error(err, "Could not wait for Cache to sync")
                    return err
                }
                return nil
            }(); err != nil {
                return err
            }
        }
        ...
        // Start the workers
        for i := 0; i < c.MaxConcurrentReconciles; i++ {
            go wait.UntilWithContext(ctx, func(ctx context.Context) {
                for c.processNextWorkItem(ctx) {
                }
            }, c.JitterPeriod)
        }

        c.Started = true
        return nil
    }()
    if err != nil {
        return err
    }

    <-ctx.Done()
    c.Log.Info("Stopping workers")
    return nil
}

func (c *Controller) processNextWorkItem(ctx context.Context) bool {
    obj, shutdown := c.Queue.Get()
    ...
    c.reconcileHandler(ctx, obj)
    return true
}

func (c *Controller) reconcileHandler(ctx context.Context, obj interface{}) {
    // Make sure that the object is a valid request.
    req, ok := obj.(reconcile.Request)
    ...
    if result, err := c.Do.Reconcile(ctx, req); err != nil {
        ...
    }
    ...
}

Controller startup mainly includes

  1. Wait for the caches to sync
  2. Start multiple workers, each looping over processNextWorkItem
  3. Each worker calls c.Do.Reconcile to process the item
    This is consistent with the sample-controller workflow: items are continuously taken from the work queue and passed to Reconcile for reconciliation.

Process summary

At this point the main logic of the code generated by kubebuilder should be clear. The overall flow is similar to the sample-controller, but kubebuilder (via controller-runtime) does a lot of the work for us, such as client and cache initialization and the controller's runtime framework; we only need to care about the Reconcile logic.

  1. Initialize the manager, creating the client and cache
  2. Create the controller; for each watched resource, create the corresponding informer and register the callback functions
  3. Start the manager, which starts the cache and the controllers

Summary

kubebuilder greatly simplifies Operator development. Understanding the principles behind it helps us optimize our Operators and apply them to production with more confidence.

References

[1] https://github.com/kubernetes/sample-controller
[2] https://book.kubebuilder.io/architecture.html
[3] https://developer.aliyun.com/article/719215
