A Practical High-DPI Screen Adaptation Scheme for the PC Web

Project background

As PC screens evolve, higher-density displays have gradually appeared on the desktop as well. Much like the Retina screens on mobile phones, this brings multi-DPI adaptation requirements to the PC. This article is a practical summary of our high-DPI adaptation scheme on the PC, and hopefully it offers some ideas to anyone who needs to adapt to high-DPI screens on the desktop.

Principle analysis

With the development of screen technology, more and more PC devices ship with large, high-definition screens. Even for web applications that only target the PC, we need to consider adaptation principles similar to those used on mobile, so let's first look at how high-definition screens work there. In the print era, DPI (Dots Per Inch), i.e. dot density, was used to describe printing precision. For mobile devices, Apple introduced the so-called Retina screen concept: the same physical screen size carries a higher pixel density, i.e. a higher PPI (Pixels Per Inch), and high-definition rendering is achieved through the proportional mapping between logical pixels and physical pixels. For the same content, more pixels can present more detail, so the picture looks finer. With that in mind, let's look at a common adaptation approach on mobile.
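
A quick way to see the logical-to-physical pixel relationship is to read the device pixel ratio in the browser (an illustration only, not part of the project code):

// Illustration: the device pixel ratio maps CSS (logical) pixels to device (physical) pixels.
const dpr = window.devicePixelRatio; // 1 on a normal screen, 2 on a Retina/@2x screen, ...

// A 100-logical-pixel square is rendered with dpr * 100 physical pixels per side,
// so the same CSS size carries more detail on a high-DPI screen.
console.log(`1 CSS px maps to ${dpr} x ${dpr} device pixels`);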

On the UI design side, mobile work usually has to cover both iOS and Android. Beyond the differences in basic interactions, how to adapt a design across the two platforms is also a frequent UI interview question. For the same application we want the user's perception to stay essentially the same, so apart from platform-specific interactions and visual styles, differences between platforms should be smoothed out as much as possible. Designs are therefore usually produced at 750x1334 (iOS @2x) and 720x1280 (Android @2x). For the Web on the PC side, we only need one design size and then simulate the Retina-style requirements, so the next step is to investigate which PC-side adaptation strategies to consider.

Based on data from the Baidu Traffic Research Institute, the resolutions that need adaptation are:

Resolution    Share     Multiple
1920x1080     44.46%    @1x
1366x768      9.37%     @1x
1536x864      8.24%     @1x
1440x900      7.85%     @1x
1600x900      7.85%     @1x
2560x1440     --        @2x
3840x2160     --        @4x
4096x2160     --        @4x

In the end, after product research, we decided to use 1366x768 as the primary design resolution and to handle compatibility across the other screens with a grid layout.

Scheme selection

For multi-resolution adaptation across devices, the common schemes are:

Scheme               Advantage                                                    Shortcoming
Media query          Styles configured per screen range                           A separate set of styles is needed for each screen range
rem + media query    Only the root font size changes, converging control          Design-draft units must be converted
vw/vh                Scales with changes in the viewport                          Design units must be converted, and browser compatibility is not as good as rem

In the end, considering compatibility, we chose the rem + media query scheme for high-DPI adaptation. However, if every unit is rewritten in rem by hand, going from the design draft to the implemented view requires a fair amount of arithmetic. So we turned to front-end tooling to perform the conversion uniformly and improve the DX (developer experience).
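
As a rough illustration of the idea (the base font size and design width here are assumptions for this example, not the project's production configuration), the root font size scales with the screen width and everything written in rem follows it. The project declares this with CSS media queries; the sketch below expresses the same arithmetic in JavaScript as a runtime equivalent:

// Runtime equivalent of the media-query scaling, for illustration only.
// Assumed values: the design draft is 1366px wide and uses a 10px root font size.
const DESIGN_WIDTH = 1366;
const BASE_FONT = 10;

function rootFontSize(screenWidth) {
  // The root font size grows in proportion to the screen width.
  return (BASE_FONT * screenWidth) / DESIGN_WIDTH;
}

document.documentElement.style.fontSize = `${rootFontSize(window.innerWidth)}px`;
// 1366 -> 10px, 1920 -> ~14.06px, 2560 -> ~18.74px, 3840 -> ~28.11px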

Case practice

We use PostCSS to transform the CSS. For flexible configuration and easy use in the project, we referenced px2rem to implement a PC-side Pcx2rem class, and then wrapped it in a custom PostCSS plugin.

Pcx2rem

// Pcx2rem
const css = require("css");
const extend = require("extend");

const pxRegExp = /\b(\d+(\.\d+)?)px\b/;

class Pcx2rem {
  constructor(config) {
    this.config = {};
    this.config = extend(
      this.config,
      {
        baseDpr: 1, // device pixel ratio the stylesheet targets
        remUnit: 10, // how many design-draft px equal 1rem
        remPrecision: 6, // decimal precision of converted rem values
        forcePxComment: "px", // trailing /*px*/ comment: convert the declaration and strip the comment
        keepComment: "no", // trailing /*no*/ comment: keep the value in px and strip the comment
        ignoreEntry: null, // optional ignore-rule instance (must expose test/getRealPx)
      },
      config
    );
  }
  generateRem(cssText) {
    const self = this;
    const config = self.config;
    const astObj = css.parse(cssText);

    function processRules(rules, noDealPx) {
      for (let i = 0; i < rules.length; i++) {
        let rule = rules[i];
        if (rule.type === "media") {
          processRules(rule.rules);
          continue;
        } else if (rule.type === "keyframes") {
          processRules(rule.keyframes, true);
          continue;
        } else if (rule.type !== "rule" && rule.type !== "keyframe") {
          continue;
        }

        // Processing px to rem conversion
        let declarations = rule.declarations;
        for (let j = 0; j < declarations.length; j++) {
          let declaration = declarations[j];
          // Convert px
          if (
            declaration.type === "declaration" &&
            pxRegExp.test(declaration.value)
          ) {
            let nextDeclaration = declarations[j + 1];
            if (nextDeclaration && nextDeclaration.type === "comment") {
              if (nextDeclaration.comment.trim() === config.forcePxComment) {
                // "0px" is special-cased: drop the unit instead of converting
                if (declaration.value === "0px") {
                  declaration.value = "0";
                  declarations.splice(j + 1, 1);
                  continue;
                }
                declaration.value = self._getCalcValue(
                  "rem",
                  declaration.value
                );
                declarations.splice(j + 1, 1);
              } else if (
                nextDeclaration.comment.trim() === config.keepComment
              ) {
                declarations.splice(j + 1, 1);
              } else {
                declaration.value = self._getCalcValue(
                  "rem",
                  declaration.value
                );
              }
            } else {
              declaration.value = self._getCalcValue("rem", declaration.value);
            }
          }
        }

        if (!rules[i].declarations.length) {
          rules.splice(i, 1);
          i--;
        }
      }
    }

    processRules(astObj.stylesheet.rules);

    return css.stringify(astObj);
  }
  _getCalcValue(type, value, dpr) {
    const config = this.config;

    // Verify that the ignore rule is met
    if (config.ignoreEntry && config.ignoreEntry.test(value)) {
      return config.ignoreEntry.getRealPx(value);
    }

    const pxGlobalRegExp = new RegExp(pxRegExp.source, "g");

    function getValue(val) {
      val = parseFloat(val.toFixed(config.remPrecision)); // Precision control
      return val === 0 ? val : val + type;
    }

    return value.replace(pxGlobalRegExp, function ($0, $1) {
      return type === "px"
        ? getValue(($1 * dpr) / config.baseDpr)
        : getValue($1 / config.remUnit);
    });
  }
}

module.exports = Pcx2rem;
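
A quick way to see what the class does is to feed it a small stylesheet directly (a hypothetical standalone usage; the exact output formatting comes from the css package's stringifier):

// Hypothetical standalone usage of the Pcx2rem class above.
const Pcx2rem = require("./libs/Pcx2rem");

const pcx2rem = new Pcx2rem({ remUnit: 10 }); // 10 design-draft px == 1rem
console.log(pcx2rem.generateRem(".box { width: 192px; }"));
// roughly:
// .box {
//   width: 19.2rem;
// }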

postCssPlugins

const postcss = require("postcss");
const Pcx2rem = require("./libs/Pcx2rem");
const PxIgnore = require("./libs/PxIgnore");

const postcss_pcx2rem = postcss.plugin("postcss-pcx2rem", function (options) {
  return function (css, result) {
    // Merge the ignore-policy helper into the plugin options
    options.ignoreEntry = new PxIgnore();
    // Create a Pcx2rem instance with the merged options
    const pcx2rem = new Pcx2rem(options);
    const oldCssText = css.toString();
    const newCssText = pcx2rem.generateRem(oldCssText);
    result.root = postcss.parse(newCssText);
  };
});

module.exports = {
  "postcss-pcx2rem": postcss_pcx2rem,
};

vue.config.js

// vue-cli 3 ships with postcss built in; just register the plugin under css.loaderOptions
const {postCssPlugins} = require('./build');

module.exports = {
    ...
    css: {
        loaderOptions: {
            postcss: {
                plugins: [
                    postCssPlugins['postcss-pcx2rem']({
                        baseDpr: 1,
                        // html base font-size (10) * design-draft width (1366) / screen width (1920)
                        remUnit: (10 * 1366) / 1920,
                        remPrecision: 6,
                        forcePxComment: "px",
                        keepComment: "no"
                    })
                ]
            }
        }
    }
    ...
}
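
A quick sanity check of what that remUnit turns a design-draft value into (hypothetical, runnable with plain Node):

// Hypothetical sanity check of the conversion the config above produces.
const remUnit = (10 * 1366) / 1920;        // ≈ 7.114583
const designPx = 100;                      // a value written as 100px in the design draft
const rem = parseFloat((designPx / remUnit).toFixed(6));

console.log(`${designPx}px -> ${rem}rem`); // "100px -> 14.055637rem"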

Source code analysis

PostCSS is often described as a post-processor, but it is essentially a CSS syntax transformer, i.e. the front half of a compiler. Unlike preprocessors such as Sass/Less, it does not compile a custom DSL: it parses ordinary CSS into an AST with a parser, runs the AST through plugins, and finally stringifies it back into new CSS. The tokenizer works in a streaming fashion and exposes nextToken() and back() methods.
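
Before walking through the core modules one by one, the whole parse -> plugin -> stringify round trip can be seen with a minimal sketch (using the public postcss API and a trivial inline plugin, not this article's plugin):

const postcss = require("postcss");

// A trivial function plugin: double every width declaration.
const doubleWidth = (root) => {
  root.walkDecls("width", (decl) => {
    decl.value = `${parseFloat(decl.value) * 2}px`;
  });
};

postcss([doubleWidth])
  .process(".box { width: 10px; }", { from: undefined })
  .then((result) => console.log(result.css)); // ".box { width: 20px; }"

With that round trip in mind, let's look at the core modules one by one.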

parser

Parser implementations roughly fall into two camps: one converts the source to an AST directly, for example with regular-expression rewriting, as the Rework analyzer does; the other, which PostCSS uses, performs lexical analysis (tokenizing) first and then builds the AST. Babel and csstree follow the same approach.

class Parser {
  constructor(input) {
    this.input = input

    this.root = new Root()
    this.current = this.root
    this.spaces = ''
    this.semicolon = false
    this.customProperty = false

    this.createTokenizer()
    this.root.source = { input, start: { offset: 0, line: 1, column: 1 } }
  }

  createTokenizer() {
    this.tokenizer = tokenizer(this.input)
  }

  parse() {
    let token
    while (!this.tokenizer.endOfFile()) {
      token = this.tokenizer.nextToken()

      switch (token[0]) {
        case 'space':
          this.spaces += token[1]
          break

        case ';':
          this.freeSemicolon(token)
          break

        case '}':
          this.end(token)
          break

        case 'comment':
          this.comment(token)
          break

        case 'at-word':
          this.atrule(token)
          break

        case '{':
          this.emptyRule(token)
          break

        default:
          this.other(token)
          break
      }
    }
    this.endFile()
  }

  comment(token) {
    // Handle a comment token
  }

  emptyRule(token) {
    // Handle a "{" with no selector (creates a rule with an empty selector)
  }

  other(start) {
    // Handle all remaining cases (selectors, declarations, ...)
  }

  rule(tokens) {
    // Build a rule node from the collected tokens
  }

  decl(tokens, customProperty) {
    // Build a declaration node from the collected tokens
  }

  atrule(token) {
    // Build an at-rule node (@media, @import, ...)
  }

  end(token) {
    if (this.current.nodes && this.current.nodes.length) {
      this.current.raws.semicolon = this.semicolon
    }
    this.semicolon = false

    this.current.raws.after = (this.current.raws.after || '') + this.spaces
    this.spaces = ''

    if (this.current.parent) {
      this.current.source.end = this.getPosition(token[2])
      this.current = this.current.parent
    } else {
      this.unexpectedClose(token)
    }
  }

  endFile() {
    if (this.current.parent) this.unclosedBlock()
    if (this.current.nodes && this.current.nodes.length) {
      this.current.raws.semicolon = this.semicolon
    }
    this.current.raws.after = (this.current.raws.after || '') + this.spaces
  }

  init(node, offset) {
    this.current.push(node)
    node.source = {
      start: this.getPosition(offset),
      input: this.input
    }
    node.raws.before = this.spaces
    this.spaces = ''
    if (node.type !== 'comment') this.semicolon = false
  }

  raw(node, prop, tokens) {
    let token, type
    let length = tokens.length
    let value = ''
    let clean = true
    let next, prev
    let pattern = /^([#.|])?(\w)+/i

    for (let i = 0; i < length; i += 1) {
      token = tokens[i]
      type = token[0]

      if (type === 'comment' && node.type === 'rule') {
        prev = tokens[i - 1]
        next = tokens[i + 1]

        if (
          prev[0] !== 'space' &&
          next[0] !== 'space' &&
          pattern.test(prev[1]) &&
          pattern.test(next[1])
        ) {
          value += token[1]
        } else {
          clean = false
        }

        continue
      }

      if (type === 'comment' || (type === 'space' && i === length - 1)) {
        clean = false
      } else {
        value += token[1]
      }
    }
    if (!clean) {
      let raw = tokens.reduce((all, i) => all + i[1], '')
      node.raws[prop] = { value, raw }
    }
    node[prop] = value
  }
}
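
In practice the Parser is rarely constructed by hand; postcss.parse wraps tokenizing and parsing in one call (a small sketch):

// Small sketch: tokenize + parse in one step through the public API.
const postcss = require("postcss");

const root = postcss.parse(".className { color: #fff; }");
root.walkDecls((decl) => {
  console.log(decl.prop, decl.value); // "color #fff"
});
console.log(root.first.selector);     // ".className"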

stringifier

The stringifier formats the AST back into CSS text:

const DEFAULT_RAW = {
  colon: ': ',
  indent: '    ',
  beforeDecl: '\n',
  beforeRule: '\n',
  beforeOpen: ' ',
  beforeClose: '\n',
  beforeComment: '\n',
  after: '\n',
  emptyBody: '',
  commentLeft: ' ',
  commentRight: ' ',
  semicolon: false
}

function capitalize(str) {
  return str[0].toUpperCase() + str.slice(1)
}

class Stringifier {
  constructor(builder) {
    this.builder = builder
  }

  stringify(node, semicolon) {
    /* istanbul ignore if */
    if (!this[node.type]) {
      throw new Error(
        'Unknown AST node type ' +
          node.type +
          '. ' +
          'Maybe you need to change PostCSS stringifier.'
      )
    }
    this[node.type](node, semicolon)
  }

  raw(node, own, detect) {
    let value
    if (!detect) detect = own

    // Already had
    if (own) {
      value = node.raws[own]
      if (typeof value !== 'undefined') return value
    }

    let parent = node.parent

    if (detect === 'before') {
      // Hack for first rule in CSS
      if (!parent || (parent.type === 'root' && parent.first === node)) {
        return ''
      }

      // `root` nodes in `document` should use only their own raws
      if (parent && parent.type === 'document') {
        return ''
      }
    }

    // Floating child without parent
    if (!parent) return DEFAULT_RAW[detect]

    // Detect style by other nodes
    let root = node.root()
    if (!root.rawCache) root.rawCache = {}
    if (typeof root.rawCache[detect] !== 'undefined') {
      return root.rawCache[detect]
    }

    if (detect === 'before' || detect === 'after') {
      return this.beforeAfter(node, detect)
    } else {
      let method = 'raw' + capitalize(detect)
      if (this[method]) {
        value = this[method](root, node)
      } else {
        root.walk(i => {
          value = i.raws[own]
          if (typeof value !== 'undefined') return false
        })
      }
    }

    if (typeof value === 'undefined') value = DEFAULT_RAW[detect]

    root.rawCache[detect] = value
    return value
  }

  beforeAfter(node, detect) {
    let value
    if (node.type === 'decl') {
      value = this.raw(node, null, 'beforeDecl')
    } else if (node.type === 'comment') {
      value = this.raw(node, null, 'beforeComment')
    } else if (detect === 'before') {
      value = this.raw(node, null, 'beforeRule')
    } else {
      value = this.raw(node, null, 'beforeClose')
    }

    let buf = node.parent
    let depth = 0
    while (buf && buf.type !== 'root') {
      depth += 1
      buf = buf.parent
    }

    if (value.includes('\n')) {
      let indent = this.raw(node, null, 'indent')
      if (indent.length) {
        for (let step = 0; step < depth; step++) value += indent
      }
    }

    return value
  }
}
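
The public stringify entry point wraps this class; driving it only needs a builder callback that receives the output chunks (a brief sketch):

// Brief sketch: stringify() feeds the serialized CSS to a builder callback.
const postcss = require("postcss");

const root = postcss.parse(".a { color: #fff }");
let out = "";
postcss.stringify(root, (chunk) => {
  out += chunk;
});
console.log(out); // ".a { color: #fff }"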

tokenize

The tokenizer turns CSS into a flat token stream. Given the following CSS:

.className {
    color: #fff;
}

the tokens come out in the following shape (positions illustrative):

[
    ["word", ".className", 1, 1, 1, 10],
    ["space", " "],
    ["{", "{", 1, 12],
    ["space", " "],
    ["word", "color", 1, 14, 1, 18],
    [":", ":", 1, 19],
    ["space", " "],
    ["word", "#fff", 1, 21, 1, 23],
    [";", ";", 1, 24],
    ["space", " "],
    ["}", "}", 1, 26]
]

The tokenizer itself looks like this:

const SINGLE_QUOTE = "'".charCodeAt(0)
const DOUBLE_QUOTE = '"'.charCodeAt(0)
const BACKSLASH = '\\'.charCodeAt(0)
const SLASH = '/'.charCodeAt(0)
const NEWLINE = '\n'.charCodeAt(0)
const SPACE = ' '.charCodeAt(0)
const FEED = '\f'.charCodeAt(0)
const TAB = '\t'.charCodeAt(0)
const CR = '\r'.charCodeAt(0)
const OPEN_SQUARE = '['.charCodeAt(0)
const CLOSE_SQUARE = ']'.charCodeAt(0)
const OPEN_PARENTHESES = '('.charCodeAt(0)
const CLOSE_PARENTHESES = ')'.charCodeAt(0)
const OPEN_CURLY = '{'.charCodeAt(0)
const CLOSE_CURLY = '}'.charCodeAt(0)
const SEMICOLON = ';'.charCodeAt(0)
const ASTERISK = '*'.charCodeAt(0)
const COLON = ':'.charCodeAt(0)
const AT = '@'.charCodeAt(0)

const RE_AT_END = /[\t\n\f\r "#'()/;[\\\]{}]/g
const RE_WORD_END = /[\t\n\f\r !"#'():;@[\\\]{}]|\/(?=\*)/g
const RE_BAD_BRACKET = /.[\n"'(/\\]/
const RE_HEX_ESCAPE = /[\da-f]/i

function tokenizer(input, options = {}) {
  let css = input.css.valueOf()
  let ignore = options.ignoreErrors

  let code, next, quote, content, escape
  let escaped, escapePos, prev, n, currentToken

  let length = css.length
  let pos = 0
  let buffer = []
  let returned = []

  function position() {
    return pos
  }

  function unclosed(what) {
    throw input.error('Unclosed ' + what, pos)
  }

  function endOfFile() {
    return returned.length === 0 && pos >= length
  }

  function nextToken(opts) {
    if (returned.length) return returned.pop()
    if (pos >= length) return

    let ignoreUnclosed = opts ? opts.ignoreUnclosed : false

    code = css.charCodeAt(pos)

    switch (code) {
      case NEWLINE:
      case SPACE:
      case TAB:
      case CR:
      case FEED: {
        next = pos
        do {
          next += 1
          code = css.charCodeAt(next)
        } while (
          code === SPACE ||
          code === NEWLINE ||
          code === TAB ||
          code === CR ||
          code === FEED
        )

        currentToken = ['space', css.slice(pos, next)]
        pos = next - 1
        break
      }

      case OPEN_SQUARE:
      case CLOSE_SQUARE:
      case OPEN_CURLY:
      case CLOSE_CURLY:
      case COLON:
      case SEMICOLON:
      case CLOSE_PARENTHESES: {
        let controlChar = String.fromCharCode(code)
        currentToken = [controlChar, controlChar, pos]
        break
      }

      case OPEN_PARENTHESES: {
        prev = buffer.length ? buffer.pop()[1] : ''
        n = css.charCodeAt(pos + 1)
        if (
          prev === 'url' &&
          n !== SINGLE_QUOTE &&
          n !== DOUBLE_QUOTE &&
          n !== SPACE &&
          n !== NEWLINE &&
          n !== TAB &&
          n !== FEED &&
          n !== CR
        ) {
          next = pos
          do {
            escaped = false
            next = css.indexOf(')', next + 1)
            if (next === -1) {
              if (ignore || ignoreUnclosed) {
                next = pos
                break
              } else {
                unclosed('bracket')
              }
            }
            escapePos = next
            while (css.charCodeAt(escapePos - 1) === BACKSLASH) {
              escapePos -= 1
              escaped = !escaped
            }
          } while (escaped)

          currentToken = ['brackets', css.slice(pos, next + 1), pos, next]

          pos = next
        } else {
          next = css.indexOf(')', pos + 1)
          content = css.slice(pos, next + 1)

          if (next === -1 || RE_BAD_BRACKET.test(content)) {
            currentToken = ['(', '(', pos]
          } else {
            currentToken = ['brackets', content, pos, next]
            pos = next
          }
        }

        break
      }

      case SINGLE_QUOTE:
      case DOUBLE_QUOTE: {
        quote = code === SINGLE_QUOTE ? "'" : '"'
        next = pos
        do {
          escaped = false
          next = css.indexOf(quote, next + 1)
          if (next === -1) {
            if (ignore || ignoreUnclosed) {
              next = pos + 1
              break
            } else {
              unclosed('string')
            }
          }
          escapePos = next
          while (css.charCodeAt(escapePos - 1) === BACKSLASH) {
            escapePos -= 1
            escaped = !escaped
          }
        } while (escaped)

        currentToken = ['string', css.slice(pos, next + 1), pos, next]
        pos = next
        break
      }

      case AT: {
        RE_AT_END.lastIndex = pos + 1
        RE_AT_END.test(css)
        if (RE_AT_END.lastIndex === 0) {
          next = css.length - 1
        } else {
          next = RE_AT_END.lastIndex - 2
        }

        currentToken = ['at-word', css.slice(pos, next + 1), pos, next]

        pos = next
        break
      }

      case BACKSLASH: {
        next = pos
        escape = true
        while (css.charCodeAt(next + 1) === BACKSLASH) {
          next += 1
          escape = !escape
        }
        code = css.charCodeAt(next + 1)
        if (
          escape &&
          code !== SLASH &&
          code !== SPACE &&
          code !== NEWLINE &&
          code !== TAB &&
          code !== CR &&
          code !== FEED
        ) {
          next += 1
          if (RE_HEX_ESCAPE.test(css.charAt(next))) {
            while (RE_HEX_ESCAPE.test(css.charAt(next + 1))) {
              next += 1
            }
            if (css.charCodeAt(next + 1) === SPACE) {
              next += 1
            }
          }
        }

        currentToken = ['word', css.slice(pos, next + 1), pos, next]

        pos = next
        break
      }

      default: {
        if (code === SLASH && css.charCodeAt(pos + 1) === ASTERISK) {
          next = css.indexOf('*/', pos + 2) + 1
          if (next === 0) {
            if (ignore || ignoreUnclosed) {
              next = css.length
            } else {
              unclosed('comment')
            }
          }

          currentToken = ['comment', css.slice(pos, next + 1), pos, next]
          pos = next
        } else {
          RE_WORD_END.lastIndex = pos + 1
          RE_WORD_END.test(css)
          if (RE_WORD_END.lastIndex === 0) {
            next = css.length - 1
          } else {
            next = RE_WORD_END.lastIndex - 2
          }

          currentToken = ['word', css.slice(pos, next + 1), pos, next]
          buffer.push(currentToken)
          pos = next
        }

        break
      }
    }

    pos++
    return currentToken
  }

  function back(token) {
    returned.push(token)
  }

  return {
    back,
    nextToken,
    endOfFile,
    position
  }
}
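
The tokenizer can also be driven by hand against PostCSS's internal modules (the paths and shapes below are internals and may differ between versions; treat this as a sketch):

// Sketch only: postcss/lib/input and postcss/lib/tokenize are internal modules.
const Input = require("postcss/lib/input");
const tokenizer = require("postcss/lib/tokenize");

const t = tokenizer(new Input(".className { color: #fff; }"));
while (!t.endOfFile()) {
  console.log(t.nextToken());
}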

processor

The plugin processing mechanism looks roughly like this:

class Processor {
    constructor(plugins = []) {
        this.plugins = this.normalize(plugins)
    }
    use(plugin) {
        // Append a normalized plugin to this.plugins
    }
    process(css, opts = {}) {
        // Return a LazyResult that runs the plugins over the parsed CSS
    }
    normalize(plugins) {
        // Normalize the plugin list (flatten plugin packs, validate entries)
    }
}

node

Node is the base class shared by the parsed AST nodes:

class Node {
  constructor(defaults = {}) {
    this.raws = {}
    this[isClean] = false
    this[my] = true

    for (let name in defaults) {
      if (name === 'nodes') {
        this.nodes = []
        for (let node of defaults[name]) {
          if (typeof node.clone === 'function') {
            this.append(node.clone())
          } else {
            this.append(node)
          }
        }
      } else {
        this[name] = defaults[name]
      }
    }
  }

  remove() {
    if (this.parent) {
      this.parent.removeChild(this)
    }
    this.parent = undefined
    return this
  }

  toString(stringifier = stringify) {
    if (stringifier.stringify) stringifier = stringifier.stringify
    let result = ''
    stringifier(this, i => {
      result += i
    })
    return result
  }

  assign(overrides = {}) {
    for (let name in overrides) {
      this[name] = overrides[name]
    }
    return this
  }

  clone(overrides = {}) {
    let cloned = cloneNode(this)
    for (let name in overrides) {
      cloned[name] = overrides[name]
    }
    return cloned
  }

  cloneBefore(overrides = {}) {
    let cloned = this.clone(overrides)
    this.parent.insertBefore(this, cloned)
    return cloned
  }

  cloneAfter(overrides = {}) {
    let cloned = this.clone(overrides)
    this.parent.insertAfter(this, cloned)
    return cloned
  }

  replaceWith(...nodes) {
    if (this.parent) {
      let bookmark = this
      let foundSelf = false
      for (let node of nodes) {
        if (node === this) {
          foundSelf = true
        } else if (foundSelf) {
          this.parent.insertAfter(bookmark, node)
          bookmark = node
        } else {
          this.parent.insertBefore(bookmark, node)
        }
      }

      if (!foundSelf) {
        this.remove()
      }
    }

    return this
  }

  next() {
    if (!this.parent) return undefined
    let index = this.parent.index(this)
    return this.parent.nodes[index + 1]
  }

  prev() {
    if (!this.parent) return undefined
    let index = this.parent.index(this)
    return this.parent.nodes[index - 1]
  }

  before(add) {
    this.parent.insertBefore(this, add)
    return this
  }

  after(add) {
    this.parent.insertAfter(this, add)
    return this
  }

  root() {
    let result = this
    while (result.parent && result.parent.type !== 'document') {
      result = result.parent
    }
    return result
  }

  raw(prop, defaultType) {
    let str = new Stringifier()
    return str.raw(this, prop, defaultType)
  }

  cleanRaws(keepBetween) {
    delete this.raws.before
    delete this.raws.after
    if (!keepBetween) delete this.raws.between
  }

  toJSON(_, inputs) {
    let fixed = {}
    let emitInputs = inputs == null
    inputs = inputs || new Map()
    let inputsNextIndex = 0

    for (let name in this) {
      if (!Object.prototype.hasOwnProperty.call(this, name)) {
        // istanbul ignore next
        continue
      }
      if (name === 'parent' || name === 'proxyCache') continue
      let value = this[name]

      if (Array.isArray(value)) {
        fixed[name] = value.map(i => {
          if (typeof i === 'object' && i.toJSON) {
            return i.toJSON(null, inputs)
          } else {
            return i
          }
        })
      } else if (typeof value === 'object' && value.toJSON) {
        fixed[name] = value.toJSON(null, inputs)
      } else if (name === 'source') {
        let inputId = inputs.get(value.input)
        if (inputId == null) {
          inputId = inputsNextIndex
          inputs.set(value.input, inputsNextIndex)
          inputsNextIndex++
        }
        fixed[name] = {
          inputId,
          start: value.start,
          end: value.end
        }
      } else {
        fixed[name] = value
      }
    }

    if (emitInputs) {
      fixed.inputs = [...inputs.keys()].map(input => input.toJSON())
    }

    return fixed
  }

  positionInside(index) {
    let string = this.toString()
    let column = this.source.start.column
    let line = this.source.start.line

    for (let i = 0; i < index; i++) {
      if (string[i] === '\n') {
        column = 1
        line += 1
      } else {
        column += 1
      }
    }

    return { line, column }
  }

  positionBy(opts) {
    let pos = this.source.start
    if (opts.index) {
      pos = this.positionInside(opts.index)
    } else if (opts.word) {
      let index = this.toString().indexOf(opts.word)
      if (index !== -1) pos = this.positionInside(index)
    }
    return pos
  }

  getProxyProcessor() {
    return {
      set(node, prop, value) {
        if (node[prop] === value) return true
        node[prop] = value
        if (
          prop === 'prop' ||
          prop === 'value' ||
          prop === 'name' ||
          prop === 'params' ||
          prop === 'important' ||
          prop === 'text'
        ) {
          node.markDirty()
        }
        return true
      },

      get(node, prop) {
        if (prop === 'proxyOf') {
          return node
        } else if (prop === 'root') {
          return () => node.root().toProxy()
        } else {
          return node[prop]
        }
      }
    }
  }

  toProxy() {
    if (!this.proxyCache) {
      this.proxyCache = new Proxy(this, this.getProxyProcessor())
    }
    return this.proxyCache
  }

  addToError(error) {
    error.postcssNode = this
    if (error.stack && this.source && /\n\s{4}at /.test(error.stack)) {
      let s = this.source
      error.stack = error.stack.replace(
        /\n\s{4}at /,
        `$&${s.input.from}:${s.start.line}:${s.start.column}$&`
      )
    }
    return error
  }

  markDirty() {
    if (this[isClean]) {
      this[isClean] = false
      let next = this
      while ((next = next.parent)) {
        next[isClean] = false
      }
    }
  }

  get proxyOf() {
    return this
  }
}
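
A few of the node methods above in action (a small sketch against the public API; output shown approximately):

// Small sketch of the Node API in use.
const postcss = require("postcss");

const root = postcss.parse(".a { color: #fff; width: 10px }");
const color = root.first.first;           // first rule, then its first declaration
console.log(color.next().toString());     // "width: 10px"

color.cloneAfter({ prop: "background" }); // clone the declaration, overriding its prop
console.log(root.toString());
// ".a { color: #fff; background: #fff; width: 10px }"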

summary

Faithfully restoring the UI design draft is the most basic skill for a front-end engineer. But as modern front-ends we should not only deliver the solution; we should also think in engineering terms, improve the DX (developer experience), reduce costs and raise efficiency. After all, we are front-end engineers, not just front-end developers. Let's keep encouraging each other!

Tags: Front-end UI postcss loader
