ZBar Source Code Analysis -- Window Module | 2021SC@SDUSC

2021SC@SDUSC

Catalog

1. Window Module

2. Code Analysis

zbar_window_redraw

Brightness Sensing and Exposure

Sensitivity tolerance

Auto exposure and 18% gray

gamma correction

Window Layer Overlay

3. Summary

1. Window Module

 

ZBar accepts either a video stream or still images as input for barcode recognition.

When the input is a video stream, the usual approach is to open a camera window and let the scanner work on it. From the video captured by the camera, ZBar has to collect and process a sequence of frames (frame-by-frame capture and so on), and the Window module then hands the processed data to other modules for decoding and further processing.

The Window module is also involved when the input is an image, which will come up in the code analysis below.

This functionality is implemented not by the Video module but by the Window module when ZBar opens the window. Separating the modules this way makes the structure of the whole project clearer and the division of work between modules more explicit.

 

In addition, from ZBar's processing flow you can see that the Window module has another core task: displaying images in a user-specified, platform-specific output window.

Most of the project's core code lives in the zbar folder; window.h and window.c there implement the core functionality of this module, while the actual calls are made from the Processor module and other APIs.

This post continues the previous code analysis and focuses on window.c.

2. Code Analysis

zbar_window_redraw

inline int zbar_window_redraw (zbar_window_t *w)
{
    int rc = 0;
    zbar_image_t *img;
    if(window_lock(w))
        return(-1);
    if(!w->display || _zbar_window_begin(w)) {
        (void)window_unlock(w);
        return(-1);
    }

    img = w->image;
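    /* redrawing requires the platform hooks (init/draw_image) and a stored image */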
    if(w->init && w->draw_image && img) {
        int format_change = (w->src_format != img->format &&
                             w->format != img->format);
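        /* negotiate the best supported output format when the source format changed */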
        if(format_change) {
            _zbar_best_format(img->format, &w->format, w->formats);
            if(!w->format)
                rc = err_capture_int(w, SEV_ERROR, ZBAR_ERR_UNSUPPORTED, __func__,
                                     "no conversion from %x to supported formats",
                                     img->format);
            w->src_format = img->format;
        }

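        /* (re)compute the destination geometry and the aspect-preserving scale factor */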
        if(!rc && (format_change || !w->scaled_size.x || !w->dst_width)) {
            point_t size = { w->width, w->height };
            zprintf(24, "init: src=%.4s(%08x) %dx%d dst=%.4s(%08x) %dx%d\n",
                    (char*)&w->src_format, w->src_format,
                    w->src_width, w->src_height,
                    (char*)&w->format, w->format,
                    w->dst_width, w->dst_height);
            if(!w->dst_width) {
                w->src_width = img->width;
                w->src_height = img->height;
            }

            if(size.x > w->max_width)
                size.x = w->max_width;
            if(size.y > w->max_height)
                size.y = w->max_height;

            if(size.x * w->src_height < size.y * w->src_width) {
                w->scale_num = size.x;
                w->scale_den = w->src_width;
            }
            else {
                w->scale_num = size.y;
                w->scale_den = w->src_height;
            }

            rc = w->init(w, img, format_change);

            if(!rc) {
                size.x = w->src_width;
                size.y = w->src_height;
                w->scaled_size = size = window_scale_pt(w, size);
                w->scaled_offset.x = ((int)w->width - size.x) / 2;
                w->scaled_offset.y = ((int)w->height - size.y) / 2;
                zprintf(24, "scale: src=%dx%d win=%dx%d by %d/%d => %dx%d @%d,%d\n",
                        w->src_width, w->src_height, w->width, w->height,
                        w->scale_num, w->scale_den,
                        size.x, size.y, w->scaled_offset.x, w->scaled_offset.y);
            }
            else {
                /* unable to display this image */
                _zbar_image_refcnt(img, -1);
                w->image = img = NULL;
            }
        }

        if(!rc &&
           (img->format != w->format ||
            img->width != w->dst_width ||
            img->height != w->dst_height)) {
            /* save *converted* image for redraw */
            zprintf(48, "convert: %.4s(%08x) %dx%d => %.4s(%08x) %dx%d\n",
                    (char*)&img->format, img->format, img->width, img->height,
                    (char*)&w->format, w->format, w->dst_width, w->dst_height);
            w->image = zbar_image_convert_resize(img, w->format,
                                                 w->dst_width, w->dst_height);
            w->image->syms = img->syms;
            if(img->syms)
                zbar_symbol_set_ref(img->syms, 1);
            zbar_image_destroy(img);
            img = w->image;
        }

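        /* draw the image, then clear the letterbox borders around the scaled area */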
        if(!rc) {
            point_t org;
            rc = w->draw_image(w, img);

            org = w->scaled_offset;
            if(org.x > 0) {
                point_t p = { 0, org.y };
                point_t s = { org.x, w->scaled_size.y };
                _zbar_window_fill_rect(w, 0, p, s);
                s.x = w->width - w->scaled_size.x - s.x;
                if(s.x > 0) {
                    p.x = w->width - s.x;
                    _zbar_window_fill_rect(w, 0, p, s);
                }
            }
            if(org.y > 0) {
                point_t p = { 0, 0 };
                point_t s = { w->width, org.y };
                _zbar_window_fill_rect(w, 0, p, s);
                s.y = w->height - w->scaled_size.y - s.y;
                if(s.y > 0) {
                    p.y = w->height - s.y;
                    _zbar_window_fill_rect(w, 0, p, s);
                }
            }
        }
        if(!rc)
            rc = window_draw_overlay(w);
    }
    else
        rc = 1;

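    /* if nothing could be drawn, fall back to the ZBar logo */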
    if(rc)
        rc = _zbar_window_draw_logo(w);

    _zbar_window_end(w);
    (void)window_unlock(w);
    return(rc);
}

The previous code analysis covered the zbar_window_draw() function, which outputs an image to the platform-specific display. zbar_window_redraw() re-renders the most recently drawn image, so it can be seen as a further operation built on top of zbar_window_draw().

Comparing the two, zbar_window_redraw() adds, relative to zbar_window_draw(), the part that deals with how the image is exposed and presented in the window.
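For orientation, here is a minimal usage sketch of the public window API (the X11 display and drawable, the captured zbar_image_t and the helper name show_frame are assumptions made for this example; error handling is omitted). As the analysis above suggests, zbar_window_draw() hands the image to the window, while zbar_window_redraw() actually renders it and is the call made again whenever the window needs repainting (expose, resize).

#include <zbar.h>

/* dpy: X11 Display*, drawable: X11 window ID -- both obtained elsewhere */
static void show_frame (void *dpy, unsigned long drawable, zbar_image_t *img)
{
    zbar_window_t *w = zbar_window_create();
    zbar_window_attach(w, dpy, drawable);   /* bind to the platform window */
    zbar_window_set_overlay(w, 1);          /* outline decoded symbols */
    zbar_window_draw(w, img);               /* store the image in the window */
    zbar_window_redraw(w);                  /* render it; call again on expose/resize */
    zbar_window_destroy(w);
}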

The algorithms involved in the code are described below.

Brightness Sensing and Exposure

Sensitivity tolerance

From the brightest to the darkest tones, the human eye can perceive a certain range of luminance; film (or an electronic sensor such as a CCD) can reproduce a much narrower range, and this limited range is called the exposure latitude (sensitivity tolerance).

Auto exposure and 18% gray

For a sensor, how can you tell whether the exposure is correct? The standard practice is to compute the mean of the Y (luma) values of the current image in YUV space. The exposure parameters are then adjusted (automatically or manually) until this mean falls near a target value, at which point the exposure is considered correct.

So how is this target Y mean chosen, and how are the parameters adjusted so that the sensor brings the brightness of the current image into that range?

This is where the concept of 18% gray comes in: a typical indoor or outdoor scene is assumed to have an average reflectivity of about 18%, and its average color can be regarded as the medium gray tone described earlier. The exposure parameters can therefore be calibrated against a gray card with 18% reflectivity, adjusting them until its rendering approaches medium gray (a Y value of about 128).
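As a rough illustration of this idea (a minimal sketch, not ZBar code; TARGET_Y, the greyscale frame assumption and the step size are made up for the example), the mean Y of a frame can be measured and a hypothetical exposure setting nudged toward the 18%-gray target:

#include <stddef.h>
#include <stdint.h>

#define TARGET_Y 128   /* the "18% gray" target in 8-bit luma */

/* mean of the Y (luma) plane of a greyscale/Y800 frame */
static unsigned mean_luma (const uint8_t *y, size_t npixels)
{
    uint64_t sum = 0;
    size_t i;
    for(i = 0; i < npixels; i++)
        sum += y[i];
    return(npixels ? (unsigned)(sum / npixels) : 0);
}

/* proportional step toward the target; the exposure units are
 * device specific and purely illustrative */
static int adjust_exposure (int exposure, unsigned mean_y)
{
    return(exposure + ((int)TARGET_Y - (int)mean_y) / 8);
}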

gamma correction

A correct mean exposure does not by itself mean that the overall brightness distribution of the image matches what the human eye sees.

In fact, the eye's response to brightness is not a linear proportion, and the input-output characteristic curves of the various devices involved in photoelectric conversion are generally also non-linear power functions, so the transfer function of the whole imaging system is itself a power function whose exponent is the product of the exponents of the individual stages: G = G1 × G2 × ... × Gn.

A sensor's own response is nearly linear, so a correction is needed for the output image to look correct on the various devices and to match the eye's response to brightness.

The exponent of the device's power-law response is what is commonly referred to as the gamma value; the correction curve applies its reciprocal, 1/gamma.

(Figure: normalized gamma curve)

In practice, when a sensor applies gamma correction it usually also converts the 10-bit raw data down to 8 bits, so the correction at this point can be expressed as out = 255 × (in / 1023)^(1/gamma).
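A minimal sketch of such a correction (not ZBar code; the function name and the fixed 10-bit width are assumptions), implemented as a lookup table from 10-bit raw values to gamma-corrected 8-bit output:

#include <math.h>
#include <stdint.h>

/* fill a 1024-entry lookup table mapping 10-bit raw input to gamma
 * corrected 8-bit output: out = 255 * (in / 1023) ^ (1 / gamma) */
static void build_gamma_lut (uint8_t lut[1024], double gamma)
{
    int in;
    for(in = 0; in < 1024; in++)
        lut[in] = (uint8_t)(255.0 * pow(in / 1023.0, 1.0 / gamma) + 0.5);
}

/* usage: build_gamma_lut(lut, 2.2); out = lut[raw & 0x3ff]; */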

Window Layer Overlay

static inline int window_draw_overlay (zbar_window_t *w)
{
    if(!w->overlay)
        return(0);
    if(w->overlay >= 1 && w->image && w->image->syms) {
        /* FIXME outline each symbol */
        const zbar_symbol_t *sym = w->image->syms->head;
        for(; sym; sym = sym->next) {
            uint32_t color = ((sym->cache_count < 0) ? 4 : 2);
            if(sym->type == ZBAR_QRCODE)
                window_outline_symbol(w, color, sym);
            else {
                /* FIXME linear bbox broken */
                point_t org = w->scaled_offset;
                int i;
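                /* scale each point into window coordinates, clamped inside the border */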
                for(i = 0; i < sym->npts; i++) {
                    point_t p = window_scale_pt(w, sym->pts[i]);
                    p.x += org.x;
                    p.y += org.y;
                    if(p.x < 3)
                        p.x = 3;
                    else if(p.x > w->width - 4)
                        p.x = w->width - 4;
                    if(p.y < 3)
                        p.y = 3;
                    else if(p.y > w->height - 4)
                        p.y = w->height - 4;
                    _zbar_window_draw_marker(w, color, p);
                }
            }
        }
    }

    if(w->overlay >= 2) {
        /* calculate/display frame rate */
        unsigned long time = _zbar_timer_now();
        if(w->time) {
            int avg = w->time_avg = (w->time_avg + time - w->time) / 2;
            point_t p = { -8, -1 };
            char text[32];
            sprintf(text, "%d.%01d fps", 1000 / avg, (10000 / avg) % 10);
            _zbar_window_draw_text(w, 3, p, text);
        }
        w->time = time;
    }
    return(0);
}

A complete image may consist of several layers of different sizes. When the computer composites these layers into one final image, they have to be rendered in ascending order of their Z value (which can be understood as a depth relationship: the larger the Z value, the nearer the layer appears to the viewer and the more of it remains visible). The usual procedure is therefore to sort the layers by Z value and render them in turn, starting from the layer with the smallest Z value. Each layer is a fixed-size rectangle: even though the shapes we see are irregular, they are contained in a rectangular area called the RGBA canvas.

The relationship between layers can be modelled as the upper layer (larger Z value) depending on the lower layer (smaller Z value). By convention an upper layer may only depend on lower layers (and, to keep the relationships simple, the implementation lets each upper layer depend on only one lower layer), so no cycles can occur, although a single layer may be depended on by several upper layers.
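A minimal sketch of this back-to-front compositing (the layer_t type and helper functions are hypothetical and unrelated to ZBar's own drawing code): the layers are sorted by ascending Z and then blended one after another onto the canvas.

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int x, y, w, h;          /* placement of the layer's rectangle */
    int z;                   /* depth: larger z ends up on top */
    const uint8_t *rgba;     /* w * h * 4 bytes, straight alpha */
} layer_t;

static int cmp_layer_z (const void *a, const void *b)
{
    return(((const layer_t*)a)->z - ((const layer_t*)b)->z);
}

/* "over" blend of one source pixel onto one destination pixel */
static void blend_px (uint8_t *dst, const uint8_t *src)
{
    unsigned a = src[3];
    int c;
    for(c = 0; c < 3; c++)
        dst[c] = (uint8_t)((src[c] * a + dst[c] * (255 - a)) / 255);
}

/* sort ascending by z, then draw each layer over the canvas in turn */
static void composite (uint8_t *canvas, int cw, int ch,
                       layer_t *layers, size_t n)
{
    size_t i;
    qsort(layers, n, sizeof(*layers), cmp_layer_z);
    for(i = 0; i < n; i++) {
        const layer_t *l = &layers[i];
        int x, y;
        for(y = 0; y < l->h; y++) {
            int cy = l->y + y;
            if(cy < 0 || cy >= ch)
                continue;
            for(x = 0; x < l->w; x++) {
                int cx = l->x + x;
                if(cx < 0 || cx >= cw)
                    continue;
                blend_px(canvas + (cy * cw + cx) * 4,
                         l->rgba + (y * l->w + x) * 4);
            }
        }
    }
}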

3. Summary

This post has walked through several functions of the Window module together with the related algorithms (exposure, gamma correction, and layer overlay). Corrections for any shortcomings are welcome.
