Let's build a component library together: a "Spot the Difference" assistant tool

Hello, everyone. We continue with the component series, and this time we're building a very cool component: a spot-the-difference assistant. The initial idea came from Mr. Huang Wei's die-hard fan group: the teacher was playing spot-the-difference again and posted a screenshot, and in the comments below nobody knew what to say other than shouting "666". For the first time I felt that technology was this close to everyday life. It felt cool, so I studied it myself, finally got it working, and now I'm sharing it with you.

Thanks here to teacher liuyubobobo's Canvas course; I learned a lot of new things and it opened my eyes to what canvas can do. Without further ado, let's get started.

The steps to implement the spot-the-difference tool are listed first, with detailed explanations to follow:

1. Get screenshot data

2. Find Key Points

3. Compare the two pictures

4. Render to page

1. Get screenshot data

1.1 Get the image data from Ctrl + V

Spot-the-difference is all about speed, so right after taking a screenshot we want Ctrl + V to immediately hand us the image data for comparison. First, write out the template and the corresponding events:

<template>
  <div>
    <!-- paste fires after Ctrl + V while the input box is focused -->
    <input
      @paste="pasteImgDate"
      @blur="onblur"
      readonly
      ref="input"
      placeholder="Ctrl + V to paste the screenshot"
    />
    <!-- prompt text -->
    <span style="color: red;">{{tips}}</span>
  </div>
</template>

export default {
  data() {
    return {
      tips: ''
    }
  },
  mounted() {
    document.addEventListener("click", this.getFocus);  // clicking anywhere on the page focuses the input, for speed!
  },
  beforeDestroy() {
    document.removeEventListener("click", this.getFocus);
  },
  methods: {
    getFocus() {
      this.$refs['input'].focus();  
    },
    pasteImgDate(e) {
      const file = e.clipboardData.items[0].getAsFile();  // get the pasted image data
      if(!file) {
        this.tips = "No image data in the clipboard";
        return;
      }
      ...
    },
    onblur() {
      this.tips = ''
    }
  }
}

First, we define a tips variable to display any problems encountered during the operation. Then we listen for the paste event, which fires when you press Ctrl + V while the input box is focused; from its event object we can get the screenshot data, just as if an image had been selected through a file-type input. We also add a click event to the document so that clicking anywhere on the page focuses the input box. All for speed!
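If you want to be a little more defensive than the snippet above, you can check that the clipboard item is actually an image before calling getAsFile. A minimal sketch (the guard and the variable names are mine, not part of the component), assuming the same handler shape:

pasteImgDate(e) {
  // Sketch only: pick the first clipboard item whose MIME type is an image.
  const items = Array.from(e.clipboardData.items);
  const imageItem = items.find(item => item.type.startsWith("image/"));
  const file = imageItem && imageItem.getAsFile();
  if (!file) {
    this.tips = "No image data in the clipboard";
    return;
  }
  // ...continue as before
}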

1.2 Draw on canvas

Next, we convert this data to base64 and draw it onto a canvas:

methods: {
  pasteImgDate(e) {
    ...
    const reader = new FileReader();
    reader.readAsDataURL(file);  // read the image file
    reader.onload = e => {
      const img = new Image();
      img.src = e.target.result;  // the converted base64 string
      img.onload = () => {
        const canvas = document.createElement("canvas"); // create a canvas tag
        const ctx = canvas.getContext("2d");
        const width = img.width;
        const height = img.height;
        canvas.width = width;
        canvas.height = height;
        ctx.drawImage(img, 0, 0, width, height);  // draw the image into the canvas
      }
    }
  }
}

Now that we have the image data, we read the file with a new FileReader and convert it to base64, assign the result to an empty img tag, listen for its onload event, and finally draw the image onto the canvas with the drawImage API. The 0, 0 in the call is the starting x, y position, followed by the drawing width and height.
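As an aside, FileReader is not the only option: the same image can also be loaded through an object URL. A minimal sketch of that alternative (not what the component uses), assuming the same file variable from the paste handler:

// Sketch: load the pasted file via an object URL instead of a base64 string.
const img = new Image();
img.onload = () => {
  // ...draw to the canvas exactly as above...
  URL.revokeObjectURL(img.src); // release the temporary URL once the image has loaded
};
img.src = URL.createObjectURL(file);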

2. Find Key Points

2.1 Confirm the pixel information of the target point

The core of this tool is explained here, and it is not complicated: we use canvas to read all the pixel data of the whole screenshot, then compare the RGB values of each pixel in the two small images to find the differences.

The two small images sit in a different spot in every screenshot, but after looking at a few of them there is a lot of regularity: the two small images are on the same horizontal line, they have the same width and height, the spacing between them is constant, and the background around them is always the same. So our first step is to find where the upper-left corner of the left image is. I opened a screenshot in PS and zoomed its upper-left corner in to the maximum:

Sampling with the eyedropper tool, we find that the RGB values of the points bordering the image are 80, 148, 176. OK, here we go: scanning across the big screenshot, the first point with this color must be that corner. Next, write the following code:

methods: {
  pasteImgDate(e) {
    ...
    ctx.drawImage(img, 0, 0, width, height); // previous code
    
    const imgData = ctx.getImageData(0, 0, width, height);  // read back every pixel of the canvas
    const pixelData = imgData.data;  // a flat array: 4 values (R, G, B, A) per pixel
  }
}
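As a quick sanity check for the sampled color (the data layout is explained in the next section), here is a tiny sketch; pixelAt is a hypothetical helper of mine, not part of the component:

// Hypothetical helper: read the RGB of the pixel at (x, y) from an ImageData object.
function pixelAt(imgData, x, y) {
  const i = (y * imgData.width + x) * 4; // 4 values (R, G, B, A) per pixel
  return {
    r: imgData.data[i + 0],
    g: imgData.data[i + 1],
    b: imgData.data[i + 2],
  };
}

// e.g. pixelAt(imgData, 30, 40) should return { r: 80, g: 148, b: 176 }
// at the point sampled with the eyedropper (the coordinates here are illustrative).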

2.2 Search by traversal based on conditions

Earlier, drawImage was used to draw the image into the canvas. Now we read that canvas back with getImageData; the pixel values are stored in its data property as a one-dimensional array (teacher liuyubobobo's canvas course has a screenshot illustrating the layout).

As explained there, the pixel values read from canvas form a one-dimensional array, with every four values representing the RGBA of one pixel. So there are two ways to traverse this array:

The first is a single-loop sequential traversal, from the first pixel to the last, like this:

for(let i = 0; i < width * height; i++) {
  const r = pixelData[4 * i + 0]; // r channel of the i-th pixel
  const g = pixelData[4 * i + 1]; // g channel of the i-th pixel
  const b = pixelData[4 * i + 2]; // b channel of the i-th pixel
}

The second is a double-loop traversal, where you always know when you have finished a row or a column, like this:

for(let y = 0; y < height; y++) {
  for(let x = 0; x < width; x++) {
    const p = y * width + x;  // index of the pixel at row y, column x
    const r = pixelData[p * 4 + 0]; // r channel of that pixel
    const g = pixelData[p * 4 + 1]; // g channel of that pixel
    const b = pixelData[p * 4 + 2]; // b channel of that pixel
  }
}

Incidentally, if you change the RGB values of pixels and then put the modified array back into the canvas, a new image is generated; knowing this, you can build a lot of interesting filters. A small example of such a filter is sketched below.
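Here is a minimal grayscale-filter sketch of that idea (my own illustration, not part of the component), assuming a 2D context ctx with a width × height drawing already on it:

// Sketch: a simple grayscale filter built on getImageData / putImageData.
const imgData = ctx.getImageData(0, 0, width, height);
const data = imgData.data;
for (let i = 0; i < width * height; i++) {
  // average the three channels and write the average back to each of them
  const avg = (data[4 * i + 0] + data[4 * i + 1] + data[4 * i + 2]) / 3;
  data[4 * i + 0] = avg;
  data[4 * i + 1] = avg;
  data[4 * i + 2] = avg;
}
ctx.putImageData(imgData, 0, 0); // the canvas now shows the grayscale version

With that aside done, let's use the second (double-loop) traversal to find the key point: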

export default {
  created() {
    this.imgPos = {};  // records the position of the key point once found
  },
  methods: {
    pasteImgDate(e) {
      ...
      // returns true when the RGB channels of the pixel at index pos sum to 404
      // (80 + 148 + 176 = 404; is that number intentional?)
      function rgbAddUp(pos) {
        return (
          pixelData[4 * pos + 0] +
            pixelData[4 * pos + 1] +
            pixelData[4 * pos + 2] ===
          404
        );
      }
      for (let y = 0; y < 200; y++) {  // limit the search to the top-left 200 x 200 region
        for (let x = 0; x < 200; x++) {
          const p = rgbAddUp(y * img.width + x);  // current point
          const top = rgbAddUp((y - 1) * img.width + x);  // point above
          const right = rgbAddUp(y * img.width + x + 1);  // point to the right
          const bottom = rgbAddUp((y + 1) * img.width + x);  // point below
          const left = rgbAddUp(y * img.width + x - 1);  // point to the left
          const rightTop = rgbAddUp((y - 1) * img.width + x + 1);  // top-right point
          const leftBottom = rgbAddUp((y + 1) * img.width + x - 1);  // bottom-left point
          if (
            p &&
            top &&
            left &&
            bottom &&
            rightTop &&
            leftBottom &&
            !right
          ) {
            if (!this.imgPos.y && !this.imgPos.x) {
              this.imgPos.y = y;
              this.imgPos.x = x;
              break;
            }
          }
        }
        if (this.imgPos.y && this.imgPos.x) {
          break;
        }
      }
    }
  }
}

Why limit the traversal range to 200? There are simply too many pixels, and traversing all of them could freeze the page. This puts a small requirement on the screenshot: we only traverse the 200 x 200 pixel region in its upper-left corner, so the key point must fall inside it.
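If you want to avoid reading outside the image on very small screenshots, the hard-coded 200 can be clamped to the actual image size. A tiny sketch (SEARCH_RANGE is my own name, not in the component):

// Sketch: clamp the search window to the image dimensions.
const SEARCH_RANGE = 200;
const maxY = Math.min(SEARCH_RANGE, img.height);
const maxX = Math.min(SEARCH_RANGE, img.width);
// ...then loop with y < maxY and x < maxX instead of the literal 200.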

The traversed y and x identify the current row and column, so we can read the pixel information of the surrounding points. If the RGB channels of the current point and of its top, left, bottom, top-right, and bottom-left neighbors each add up to 404 (the color we marked in the picture), while the point to the right does not, then this x and y is the corner we want; record it and exit both loops.

3. Compare the two pictures

3.1 Find points that match

Now that we have found the key point, here are some fixed values measured in PS: the small image is 286 pixels tall and 381 pixels wide, and the horizontal distance from the left edge of the first image to the left edge of the second image is 457 pixels. With these fixed values, we can walk through the pixel data of both images at the same time and find where they differ:

pasteImgDate(e) {
  ...
  // Walk every pixel of the left image and compare it with the pixel 457px
  // to its right, i.e. the same position in the right image.
  for (let y = this.imgPos.y; y < 286 + this.imgPos.y; y++) {
    for (let x = this.imgPos.x; x < 381 + this.imgPos.x; x++) {
      const leftIdx = (y * img.width + x) * 4;         // byte offset of the pixel in the left image
      const rightIdx = (y * img.width + x + 457) * 4;  // byte offset of the matching pixel in the right image
      if (
        pixelData[leftIdx + 0] + 10 > pixelData[rightIdx + 0] &&
        pixelData[leftIdx + 1] + 10 > pixelData[rightIdx + 1] &&
        pixelData[leftIdx + 2] + 10 > pixelData[rightIdx + 2]
      ) {
        // the two pixels are considered the same: black out the one in the right image
        pixelData[rightIdx + 0] = 0;
        pixelData[rightIdx + 1] = 0;
        pixelData[rightIdx + 2] = 0;
      }
    }
  }
}

Why not use !== to compare two pixels? Because the two images differ in more places than just the intentional differences: there are small pixel-level fluctuations everywhere, so a strict !== comparison would not accurately reflect the real differences in the result. Instead, we loosen the criteria with a tolerance of 10 per channel and set the RGB of every "matching" point to 0, that is, to black, so only the differences remain visible.
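For comparison, here is a sketch of a symmetric tolerance check using the absolute per-channel difference (an alternative I'm illustrating, not the component's actual one-sided "+ 10" check); leftIdx and rightIdx are the byte offsets of the two pixels being compared, and TOLERANCE is my own constant:

// Sketch only: treat two pixels as "the same" when every channel differs by less than TOLERANCE.
const TOLERANCE = 10;

function samePixel(pixelData, leftIdx, rightIdx) {
  return (
    Math.abs(pixelData[leftIdx + 0] - pixelData[rightIdx + 0]) < TOLERANCE &&
    Math.abs(pixelData[leftIdx + 1] - pixelData[rightIdx + 1]) < TOLERANCE &&
    Math.abs(pixelData[leftIdx + 2] - pixelData[rightIdx + 2]) < TOLERANCE
  );
}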

4. Render to page

4.1 Add it to a canvas

The pixel information has been modified, so now we put it back into a canvas tag and append that canvas to the body:

pasteImgDate(e) {
  ...
  if (!this.imgPos.y && !this.imgPos.x) {
    this.tips = "Screenshot does not match";
    return;
  }
  delete this.imgPos.y;  // clear the recorded position so the next paste starts fresh
  delete this.imgPos.x;
  const canDraw = document.getElementById("__canvas_diff_");
  canDraw && document.body.removeChild(canDraw);  // remove the previous diff canvas, if any
  const canvas2 = document.createElement("canvas");
  const ctx2 = canvas2.getContext("2d");
  canvas2.id = "__canvas_diff_";
  canvas2.width = width;
  canvas2.height = height;
  ctx2.putImageData(imgData, 0, 0, 0, 0, width, height);  // draw the modified pixel data into the new canvas
  document.body.appendChild(canvas2);
}
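One small optional touch (not in the original component): scroll the freshly appended canvas into view so the result is immediately visible.

// Optional: bring the diff canvas into view after appending it.
canvas2.scrollIntoView({ behavior: "smooth", block: "start" });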

Component Installation

npm i vue-gn-components

import { FindDIff } from 'vue-gn-components';
import "vue-gn-components/lib/style/index.css";
Vue.use(FindDIff)

Component Usage

<template>
  <find-diff />
</template>

Finally

  • If you use the QQ screenshot tool, make sure the screenshot is saved in png format, because jpg compression is lossy at the pixel level. The first time, save a png locally and the save dialog will remember your choice afterwards.

  • This tool is now complete. Go show someone special how to write it ~ Source code >> vue-gn-components; the project includes screenshot images for easy testing. If you think it's OK, please give it a star, which is also what motivates me to keep updating ~
