Changing front

This commit is contained in:
2023-01-16 17:44:37 +01:00
parent 0b8a93b256
commit 4fe4be7730
48586 changed files with 4725790 additions and 17464 deletions

front/app/node_modules/read-package-json/LICENSE

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

front/app/node_modules/read-package-json/README.md

@@ -0,0 +1,157 @@
# read-package-json
This is the thing that npm uses to read package.json files. It
validates some stuff, and loads some default things.
It keeps a cache of the files you've read, so that you don't end
up reading the same package.json file multiple times.
Note that if you just want to see what's literally in the package.json
file, you can usually do `var data = require('some-module/package.json')`.
This module is basically only needed by npm, but it's handy to see what
npm will see when it looks at your package.
## Usage
```javascript
var readJson = require('read-package-json')
// readJson(filename, [logFunction=noop], [strict=false], cb)
readJson('/path/to/package.json', console.error, false, function (er, data) {
if (er) {
console.error("There was an error reading the file")
return
}
console.error('the package data is', data)
});
```
## readJson(file, [logFn = noop], [strict = false], cb)
* `file` {String} The path to the package.json file
* `logFn` {Function} Function to handle logging. Defaults to a noop.
* `strict` {Boolean} True to enforce SemVer 2.0 version strings, and
other strict requirements.
* `cb` {Function} Gets called with `(er, data)`, as is The Node Way.
Reads the JSON file and does the things.
## `package.json` Fields
See `man 5 package.json` or `npm help json`.
## readJson.log
By default this is a reference to the `npmlog` module. But if that
module can't be found, then it'll be set to just a dummy thing that does
nothing.
Replace with your own `{log,warn,error}` object for fun loggy time.
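A replacement logger only needs those three methods. A minimal sketch; the `[readJson]` prefix is purely illustrative, not part of any API:

```javascript
// A minimal {log, warn, error} replacement of the kind described above.
const myLog = {
  log: (...args) => console.log('[readJson]', ...args),
  warn: (...args) => console.warn('[readJson] warning:', ...args),
  error: (...args) => console.error('[readJson] error:', ...args),
}

// Wire it in with: readJson.log = myLog
myLog.log('loaded package data')
```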
## readJson.extras(file, data, cb)
Run all the extra stuff relative to the file, with the parsed data.
Modifies the data as it does stuff. Calls the cb when it's done.
## readJson.extraSet = [fn, fn, ...]
Array of functions that are called by `extras`. Each one receives the
arguments `fn(file, data, cb)` and is expected to call `cb(er, data)`
when done or when an error occurs.
Order is indeterminate, so each function should be completely
independent.
Mix and match!
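The contract for an `extraSet` entry is exactly what the text describes: receive `(file, data, cb)`, mutate `data`, and call `cb(er, data)` once. A hypothetical extra (the `readAt` field is invented for illustration):

```javascript
// A hypothetical extraSet entry following the (file, data, cb) contract:
// mutate `data`, then call cb(er, data) exactly once when done.
function addReadAt (file, data, cb) {
  data.readAt = new Date().toISOString() // invented field, illustration only
  cb(null, data)
}

// Wire it in with: readJson.extraSet.push(addReadAt)
addReadAt('/tmp/package.json', { name: 'demo' }, function (er, data) {
  if (er) throw er
  console.log(data.name, data.readAt)
})
```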
## Other Relevant Files Besides `package.json`
Some other files have an effect on the resulting data object, in the
following ways:
### `README?(.*)`
If there is a `README` or `README.*` file present, then npm will attach
a `readme` field to the data with the contents of this file.
Owing to the fact that roughly 100% of existing node modules have
Markdown README files, it will generally be assumed to be Markdown,
regardless of the extension. Please plan accordingly.
### `server.js`
If there is a `server.js` file, and there is not already a
`scripts.start` field, then `scripts.start` will be set to `node
server.js`.
### `AUTHORS`
If there is not already a `contributors` field, then the `contributors`
field will be set to the contents of the `AUTHORS` file, split by lines,
and parsed.
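The splitting step mirrors the `authors_` helper in the implementation further down: strip `#` comment lines, trim, and drop blanks. As a standalone sketch:

```javascript
// Split an AUTHORS file into contributor lines, the way authors_ does.
function parseAuthors (text) {
  return text.split(/\r?\n/)
    .map(line => line.replace(/^\s*#.*$/, '').trim())
    .filter(line => line)
}

console.log(parseAuthors('# core team\nAda Lovelace <ada@example.com>\n\nGrace Hopper\n'))
// → [ 'Ada Lovelace <ada@example.com>', 'Grace Hopper' ]
```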
### `bindings.gyp`
If a bindings.gyp file exists, and there is not already a
`scripts.install` field, then the `scripts.install` field will be set to
`node-gyp rebuild`.
### `index.js`
If the `package.json` file does not exist, but there is an `index.js` file
present instead, and that file has a package comment, then it will try
to parse the package comment, and use that as the data instead.
A package comment looks like this:
```javascript
/**package
* { "name": "my-bare-module"
* , "version": "1.2.3"
* , "description": "etc...." }
**/
// or...
/**package
{ "name": "my-bare-module"
, "version": "1.2.3"
, "description": "etc...." }
**/
```
The important thing is that it starts with `/**package`, and ends with
`**/`. If the package.json file exists, then the index.js is not
parsed.
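A standalone sketch of that extraction, using the same regexes as the `parseIndex` function in the implementation further down:

```javascript
// Extract and parse a /**package ... **/ comment, mirroring parseIndex.
function parsePackageComment (src) {
  let parts = src.split(/^\/\*\*package(?:\s|$)/m)
  if (parts.length < 2) return null
  parts = parts[1].split(/\*\*\/$/m)
  if (parts.length < 2) return null
  // strip any leading "*" decoration on each line, then parse as JSON
  const body = parts[0].replace(/^\s*\*/mg, '')
  try {
    return JSON.parse(body)
  } catch (er) {
    return null
  }
}

const src = '/**package\n{ "name": "my-bare-module", "version": "1.2.3" }\n**/\n'
console.log(parsePackageComment(src)) // → { name: 'my-bare-module', version: '1.2.3' }
```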
### `{directories.man}/*.[0-9]`
If there is not already a `man` field defined as an array of files or a
single file, and
there is a `directories.man` field defined, then that directory will
be searched for manpages.
Any valid manpages found in that directory will be assigned to the `man`
array, and installed in the appropriate man directory at package install
time, when installed globally on a Unix system.
### `{directories.bin}/*`
If there is not already a `bin` field defined as a string filename or a
hash of `<name> : <filename>` pairs, then the `directories.bin`
directory will be searched and all the files within it will be linked as
executables at install time.
When installing locally, npm links bins into `node_modules/.bin`, which
is in the `PATH` environment variable when npm runs scripts. When
installing globally, they are linked into `{prefix}/bin`, which is
presumably in the `PATH` environment variable.
### `types` field
If you do not have a `types` field, then it will check whether a
corresponding `*.d.ts` file exists for your package entry file and, if
so, add a `types` field to the parsed data.


@@ -0,0 +1,605 @@
var fs = require('fs')
var path = require('path')
var glob = require('glob')
var normalizeData = require('normalize-package-data')
var safeJSON = require('json-parse-even-better-errors')
var util = require('util')
var normalizePackageBin = require('npm-normalize-package-bin')
module.exports = readJson
// put more stuff on here to customize.
readJson.extraSet = [
bundleDependencies,
gypfile,
serverjs,
scriptpath,
authors,
readme,
mans,
bins,
githead,
fillTypes,
]
var typoWarned = {}
var cache = {}
function readJson (file, log_, strict_, cb_) {
var log, strict, cb
for (var i = 1; i < arguments.length - 1; i++) {
if (typeof arguments[i] === 'boolean') {
strict = arguments[i]
} else if (typeof arguments[i] === 'function') {
log = arguments[i]
}
}
if (!log) {
log = function () {}
}
cb = arguments[arguments.length - 1]
readJson_(file, log, strict, cb)
}
function readJson_ (file, log, strict, cb) {
fs.readFile(file, 'utf8', function (er, d) {
parseJson(file, er, d, log, strict, cb)
})
}
function stripBOM (content) {
// Remove byte order marker. This catches EF BB BF (the UTF-8 BOM)
// because the buffer-to-string conversion in `fs.readFileSync()`
// translates it to FEFF, the UTF-16 BOM.
if (content.charCodeAt(0) === 0xFEFF) {
content = content.slice(1)
}
return content
}
function jsonClone (obj) {
if (obj == null) {
return obj
} else if (Array.isArray(obj)) {
var newarr = new Array(obj.length)
for (var ii in obj) {
newarr[ii] = jsonClone(obj[ii])
}
return newarr
} else if (typeof obj === 'object') {
var newobj = {}
for (var kk in obj) {
newobj[kk] = jsonClone(obj[kk])
}
return newobj
} else {
return obj
}
}
function parseJson (file, er, d, log, strict, cb) {
if (er && er.code === 'ENOENT') {
return fs.stat(path.dirname(file), function (err, stat) {
if (!err && stat && !stat.isDirectory()) {
// ENOTDIR isn't used on Windows, but npm expects it.
er = Object.create(er)
er.code = 'ENOTDIR'
return cb(er)
} else {
return indexjs(file, er, log, strict, cb)
}
})
}
if (er) {
return cb(er)
}
if (cache[d]) {
return cb(null, jsonClone(cache[d]))
}
var data
try {
data = safeJSON(stripBOM(d))
for (var key in data) {
if (/^_/.test(key)) {
delete data[key]
}
}
} catch (jsonErr) {
data = parseIndex(d)
if (!data) {
return cb(parseError(jsonErr, file))
}
}
extrasCached(file, d, data, log, strict, cb)
}
function extrasCached (file, d, data, log, strict, cb) {
extras(file, data, log, strict, function (err, extrasData) {
if (!err) {
cache[d] = jsonClone(extrasData)
}
cb(err, extrasData)
})
}
function indexjs (file, er, log, strict, cb) {
if (path.basename(file) === 'index.js') {
return cb(er)
}
var index = path.resolve(path.dirname(file), 'index.js')
fs.readFile(index, 'utf8', function (er2, d) {
if (er2) {
return cb(er)
}
if (cache[d]) {
return cb(null, cache[d])
}
var data = parseIndex(d)
if (!data) {
return cb(er)
}
extrasCached(file, d, data, log, strict, cb)
})
}
readJson.extras = extras
function extras (file, data, log_, strict_, cb_) {
var log, strict, cb
for (var i = 2; i < arguments.length - 1; i++) {
if (typeof arguments[i] === 'boolean') {
strict = arguments[i]
} else if (typeof arguments[i] === 'function') {
log = arguments[i]
}
}
if (!log) {
log = function () {}
}
cb = arguments[i]
var set = readJson.extraSet
var n = set.length
var errState = null
set.forEach(function (fn) {
fn(file, data, then)
})
function then (er) {
if (errState) {
return
}
if (er) {
return cb(errState = er)
}
if (--n > 0) {
return
}
final(file, data, log, strict, cb)
}
}
function scriptpath (file, data, cb) {
if (!data.scripts) {
return cb(null, data)
}
var k = Object.keys(data.scripts)
k.forEach(scriptpath_, data.scripts)
cb(null, data)
}
function scriptpath_ (key) {
var s = this[key]
// This is never allowed, and only causes problems
if (typeof s !== 'string') {
return delete this[key]
}
var spre = /^(\.[/\\])?node_modules[/\\].bin[\\/]/
if (s.match(spre)) {
this[key] = this[key].replace(spre, '')
}
}
function gypfile (file, data, cb) {
var dir = path.dirname(file)
var s = data.scripts || {}
if (s.install || s.preinstall) {
return cb(null, data)
}
glob('*.gyp', { cwd: dir }, function (er, files) {
if (er) {
return cb(er)
}
if (data.gypfile === false) {
return cb(null, data)
}
gypfile_(file, data, files, cb)
})
}
function gypfile_ (file, data, files, cb) {
if (!files.length) {
return cb(null, data)
}
var s = data.scripts || {}
s.install = 'node-gyp rebuild'
data.scripts = s
data.gypfile = true
return cb(null, data)
}
function serverjs (file, data, cb) {
var dir = path.dirname(file)
var s = data.scripts || {}
if (s.start) {
return cb(null, data)
}
glob('server.js', { cwd: dir }, function (er, files) {
if (er) {
return cb(er)
}
serverjs_(file, data, files, cb)
})
}
function serverjs_ (file, data, files, cb) {
if (!files.length) {
return cb(null, data)
}
var s = data.scripts || {}
s.start = 'node server.js'
data.scripts = s
return cb(null, data)
}
function authors (file, data, cb) {
if (data.contributors) {
return cb(null, data)
}
var af = path.resolve(path.dirname(file), 'AUTHORS')
fs.readFile(af, 'utf8', function (er, ad) {
// ignore error. just checking it.
if (er) {
return cb(null, data)
}
authors_(file, data, ad, cb)
})
}
function authors_ (file, data, ad, cb) {
ad = ad.split(/\r?\n/g).map(function (line) {
return line.replace(/^\s*#.*$/, '').trim()
}).filter(function (line) {
return line
})
data.contributors = ad
return cb(null, data)
}
function readme (file, data, cb) {
if (data.readme) {
return cb(null, data)
}
var dir = path.dirname(file)
var globOpts = { cwd: dir, nocase: true, mark: true }
glob('{README,README.*}', globOpts, function (er, files) {
if (er) {
return cb(er)
}
// don't accept directories.
files = files.filter(function (filtered) {
return !filtered.match(/\/$/)
})
if (!files.length) {
return cb()
}
var fn = preferMarkdownReadme(files)
var rm = path.resolve(dir, fn)
readme_(file, data, rm, cb)
})
}
function preferMarkdownReadme (files) {
var fallback = 0
var re = /\.m?a?r?k?d?o?w?n?$/i
for (var i = 0; i < files.length; i++) {
if (files[i].match(re)) {
return files[i]
} else if (files[i].match(/README$/)) {
fallback = i
}
}
// prefer README.md, followed by README; otherwise, return
// the first filename (which could be README)
return files[fallback]
}
function readme_ (file, data, rm, cb) {
var rmfn = path.basename(rm)
fs.readFile(rm, 'utf8', function (er, rmData) {
// maybe not readable, or something.
if (er) {
return cb()
}
data.readme = rmData
data.readmeFilename = rmfn
return cb(er, data)
})
}
function mans (file, data, cb) {
let cwd = data.directories && data.directories.man
if (data.man || !cwd) {
return cb(null, data)
}
const dirname = path.dirname(file)
cwd = path.resolve(path.dirname(file), cwd)
glob('**/*.[0-9]', { cwd }, function (er, mansGlob) {
if (er) {
return cb(er)
}
data.man = mansGlob.map(man =>
path.relative(dirname, path.join(cwd, man)).split(path.sep).join('/')
)
return cb(null, data)
})
}
function bins (file, data, cb) {
data = normalizePackageBin(data)
var m = data.directories && data.directories.bin
if (data.bin || !m) {
return cb(null, data)
}
m = path.resolve(path.dirname(file), m)
glob('**', { cwd: m }, function (er, binsGlob) {
if (er) {
return cb(er)
}
bins_(file, data, binsGlob, cb)
})
}
function bins_ (file, data, binsGlob, cb) {
var m = (data.directories && data.directories.bin) || '.'
data.bin = binsGlob.reduce(function (acc, mf) {
if (mf && mf.charAt(0) !== '.') {
var f = path.basename(mf)
acc[f] = path.join(m, mf)
}
return acc
}, {})
return cb(null, normalizePackageBin(data))
}
function bundleDependencies (file, data, cb) {
var bd = 'bundleDependencies'
var bdd = 'bundledDependencies'
// normalize key name
if (data[bdd] !== undefined) {
if (data[bd] === undefined) {
data[bd] = data[bdd]
}
delete data[bdd]
}
if (data[bd] === false) {
delete data[bd]
} else if (data[bd] === true) {
data[bd] = Object.keys(data.dependencies || {})
} else if (data[bd] !== undefined && !Array.isArray(data[bd])) {
delete data[bd]
}
return cb(null, data)
}
function githead (file, data, cb) {
if (data.gitHead) {
return cb(null, data)
}
var dir = path.dirname(file)
var head = path.resolve(dir, '.git/HEAD')
fs.readFile(head, 'utf8', function (er, headData) {
if (er) {
var parent = path.dirname(dir)
if (parent === dir) {
return cb(null, data)
}
return githead(dir, data, cb)
}
githead_(data, dir, headData, cb)
})
}
function githead_ (data, dir, head, cb) {
if (!head.match(/^ref: /)) {
data.gitHead = head.trim()
return cb(null, data)
}
var headRef = head.replace(/^ref: /, '').trim()
var headFile = path.resolve(dir, '.git', headRef)
fs.readFile(headFile, 'utf8', function (er, headData) {
if (er || !headData) {
var packFile = path.resolve(dir, '.git/packed-refs')
return fs.readFile(packFile, 'utf8', function (readFileErr, refs) {
if (readFileErr || !refs) {
return cb(null, data)
}
refs = refs.split('\n')
for (var i = 0; i < refs.length; i++) {
var match = refs[i].match(/^([0-9a-f]{40}) (.+)$/)
if (match && match[2].trim() === headRef) {
data.gitHead = match[1]
break
}
}
return cb(null, data)
})
}
headData = headData.replace(/^ref: /, '').trim()
data.gitHead = headData
return cb(null, data)
})
}
/**
* Warn if the bin references don't point to anything. This might be better in
* normalize-package-data if it had access to the file path.
*/
function checkBinReferences_ (file, data, warn, cb) {
if (!(data.bin instanceof Object)) {
return cb()
}
var keys = Object.keys(data.bin)
var keysLeft = keys.length
if (!keysLeft) {
return cb()
}
function handleExists (relName, result) {
keysLeft--
if (!result) {
warn('No bin file found at ' + relName)
}
if (!keysLeft) {
cb()
}
}
keys.forEach(function (key) {
var dirName = path.dirname(file)
var relName = data.bin[key]
/* istanbul ignore if - impossible, bins have been normalized */
if (typeof relName !== 'string') {
var msg = 'Bin filename for ' + key +
' is not a string: ' + util.inspect(relName)
warn(msg)
delete data.bin[key]
handleExists(relName, true)
return
}
var binPath = path.resolve(dirName, relName)
fs.stat(binPath, (err) => handleExists(relName, !err))
})
}
function final (file, data, log, strict, cb) {
var pId = makePackageId(data)
function warn (msg) {
if (typoWarned[pId]) {
return
}
if (log) {
log('package.json', pId, msg)
}
}
try {
normalizeData(data, warn, strict)
} catch (error) {
return cb(error)
}
checkBinReferences_(file, data, warn, function () {
typoWarned[pId] = true
cb(null, data)
})
}
function fillTypes (file, data, cb) {
var index = data.main ? data.main : 'index.js'
if (typeof index !== 'string') {
return cb(new TypeError('The "main" attribute must be of type string.'))
}
// TODO exports is much more complicated than this in verbose format
// We need to support for instance
// "exports": {
// ".": [
// {
// "default": "./lib/npm.js"
// },
// "./lib/npm.js"
// ],
// "./package.json": "./package.json"
// },
// as well as conditional exports
// if (data.exports && typeof data.exports === 'string') {
// index = data.exports
// }
// if (data.exports && data.exports['.']) {
// index = data.exports['.']
// if (typeof index !== 'string') {
// }
// }
var extless =
path.join(path.dirname(index), path.basename(index, path.extname(index)))
var dts = `./${extless}.d.ts`
var dtsPath = path.join(path.dirname(file), dts)
var hasDTSFields = 'types' in data || 'typings' in data
if (!hasDTSFields && fs.existsSync(dtsPath)) {
data.types = dts.split(path.sep).join('/')
}
cb(null, data)
}
function makePackageId (data) {
var name = cleanString(data.name)
var ver = cleanString(data.version)
return name + '@' + ver
}
function cleanString (str) {
return (!str || typeof (str) !== 'string') ? '' : str.trim()
}
// /**package { "name": "foo", "version": "1.2.3", ... } **/
function parseIndex (data) {
data = data.split(/^\/\*\*package(?:\s|$)/m)
if (data.length < 2) {
return null
}
data = data[1]
data = data.split(/\*\*\/$/m)
if (data.length < 2) {
return null
}
data = data[0]
data = data.replace(/^\s*\*/mg, '')
try {
return safeJSON(data)
} catch (er) {
return null
}
}
function parseError (ex, file) {
var e = new Error('Failed to parse json\n' + ex.message)
e.code = 'EJSONPARSE'
e.path = file
return e
}


@@ -0,0 +1,25 @@
Copyright 2017 Kat Marchán
Copyright npm, Inc.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
---
This library is a fork of 'better-json-errors' by Kat Marchán, extended and
distributed under the terms of the MIT license above.


@@ -0,0 +1,96 @@
# json-parse-even-better-errors
[`json-parse-even-better-errors`](https://github.com/npm/json-parse-even-better-errors)
is a Node.js library for getting nicer errors out of `JSON.parse()`,
including context and position of the parse errors.
It also preserves the newline and indentation styles of the JSON data, by
putting them in the object or array in the `Symbol.for('indent')` and
`Symbol.for('newline')` properties.
## Install
`$ npm install --save json-parse-even-better-errors`
## Table of Contents
* [Example](#example)
* [Features](#features)
* [Contributing](#contributing)
* [API](#api)
* [`parse`](#parse)
### Example
```javascript
const parseJson = require('json-parse-even-better-errors')
parseJson('"foo"') // returns the string 'foo'
parseJson('garbage') // more useful error message
parseJson.noExceptions('garbage') // returns undefined
```
### Features
* Like JSON.parse, but the errors are better.
* Strips a leading byte-order-mark that you sometimes get reading files.
* Has a `noExceptions` method that returns undefined rather than throwing.
* Attaches the newline character(s) used to the `Symbol.for('newline')`
property on objects and arrays.
* Attaches the indentation character(s) used to the `Symbol.for('indent')`
property on objects and arrays.
## Indentation
To preserve indentation when the file is saved back to disk, use
`data[Symbol.for('indent')]` as the third argument to `JSON.stringify`, and
if you want to preserve windows `\r\n` newlines, replace the `\n` chars in
the string with `data[Symbol.for('newline')]`.
For example:
```js
const txt = await readFile('./package.json', 'utf8')
const data = parseJsonEvenBetterErrors(txt)
const indent = Symbol.for('indent')
const newline = Symbol.for('newline')
// .. do some stuff to the data ..
const string = JSON.stringify(data, null, data[indent]) + '\n'
const eolFixed = data[newline] === '\n' ? string
: string.replace(/\n/g, data[newline])
await writeFile('./package.json', eolFixed)
```
Indentation is determined by looking at the whitespace between the initial
`{` and `[` and the character that follows it. If you have lots of weird
inconsistent indentation, then it won't track that or give you any way to
preserve it. Whether this is a bug or a feature is debatable ;)
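That detection is a single regex against the opening of the text; a minimal sketch using the same `formatRE` pattern as the implementation below:

```js
// Capture the newline style and the indent that follows the opening { or [.
const formatRE = /^\s*[{[]((?:\r?\n)+)([\s\t]*)/

function detectFormat (txt) {
  const [, newline = '\n', indent = '  '] = txt.match(formatRE) || [null, '', '']
  return { newline, indent }
}

console.log(detectFormat('{\r\n    "a": 1\r\n}')) // → { newline: '\r\n', indent: '    ' }
console.log(detectFormat('{"a":1}'))              // → { newline: '', indent: '' }
```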
### API
#### <a name="parse"></a> `parse(txt, reviver = null, context = 20)`
Works just like `JSON.parse`, but will include a bit more information when
an error happens, and attaches a `Symbol.for('indent')` and
`Symbol.for('newline')` on objects and arrays. On failure it throws a
`JSONParseError` instead of a plain `SyntaxError`.
#### <a name="noexceptions"></a> `parse.noExceptions(txt, reviver = null)`
Works just like `JSON.parse`, but will return `undefined` rather than
throwing an error.
#### <a name="jsonparseerror"></a> `class JSONParseError(er, text, context = 20, caller = null)`
Extends the JavaScript `SyntaxError` class to parse the message and provide
better metadata.
Pass in the error thrown by the built-in `JSON.parse`, and the text being
parsed, and it'll parse out the bits needed to be helpful.
`context` defaults to 20.
Set a `caller` function to trim internal implementation details out of the
stack trace. When calling `parseJson`, this is set to the `parseJson`
function. If not set, then the constructor defaults to itself, so the
stack trace will point to the spot where you call `new JSONParseError`.


@@ -0,0 +1,129 @@
'use strict'
const hexify = char => {
const h = char.charCodeAt(0).toString(16).toUpperCase()
return '0x' + (h.length % 2 ? '0' : '') + h
}
const parseError = (e, txt, context) => {
if (!txt) {
return {
message: e.message + ' while parsing empty string',
position: 0,
}
}
const badToken = e.message.match(/^Unexpected token (.) .*position\s+(\d+)/i)
const errIdx = badToken ? +badToken[2]
: e.message.match(/^Unexpected end of JSON.*/i) ? txt.length - 1
: null
const msg = badToken ? e.message.replace(/^Unexpected token ./, `Unexpected token ${
JSON.stringify(badToken[1])
} (${hexify(badToken[1])})`)
: e.message
if (errIdx !== null && errIdx !== undefined) {
const start = errIdx <= context ? 0
: errIdx - context
const end = errIdx + context >= txt.length ? txt.length
: errIdx + context
const slice = (start === 0 ? '' : '...') +
txt.slice(start, end) +
(end === txt.length ? '' : '...')
const near = txt === slice ? '' : 'near '
return {
message: msg + ` while parsing ${near}${JSON.stringify(slice)}`,
position: errIdx,
}
} else {
return {
message: msg + ` while parsing '${txt.slice(0, context * 2)}'`,
position: 0,
}
}
}
class JSONParseError extends SyntaxError {
constructor (er, txt, context, caller) {
context = context || 20
const metadata = parseError(er, txt, context)
super(metadata.message)
Object.assign(this, metadata)
this.code = 'EJSONPARSE'
this.systemError = er
Error.captureStackTrace(this, caller || this.constructor)
}
get name () {
return this.constructor.name
}
set name (n) {}
get [Symbol.toStringTag] () {
return this.constructor.name
}
}
const kIndent = Symbol.for('indent')
const kNewline = Symbol.for('newline')
// only respect indentation if we got a line break, otherwise squash it
// things other than objects and arrays aren't indented, so ignore those
// Important: in both of these regexps, the $1 capture group is the newline
// or undefined, and the $2 capture group is the indent, or undefined.
const formatRE = /^\s*[{[]((?:\r?\n)+)([\s\t]*)/
const emptyRE = /^(?:\{\}|\[\])((?:\r?\n)+)?$/
const parseJson = (txt, reviver, context) => {
const parseText = stripBOM(txt)
context = context || 20
try {
// get the indentation so that we can save it back nicely
// if the file starts with {" then we have an indent of '', ie, none
// otherwise, pick the indentation of the next line after the first \n
// If the pattern doesn't match, then it means no indentation.
// JSON.stringify ignores symbols, so this is reasonably safe.
// if the string is '{}' or '[]', then use the default 2-space indent.
const [, newline = '\n', indent = '  '] = parseText.match(emptyRE) ||
parseText.match(formatRE) ||
[null, '', '']
const result = JSON.parse(parseText, reviver)
if (result && typeof result === 'object') {
result[kNewline] = newline
result[kIndent] = indent
}
return result
} catch (e) {
if (typeof txt !== 'string' && !Buffer.isBuffer(txt)) {
const isEmptyArray = Array.isArray(txt) && txt.length === 0
throw Object.assign(new TypeError(
`Cannot parse ${isEmptyArray ? 'an empty array' : String(txt)}`
), {
code: 'EJSONPARSE',
systemError: e,
})
}
throw new JSONParseError(e, parseText, context, parseJson)
}
}
// Remove byte order marker. This catches EF BB BF (the UTF-8 BOM)
// because the buffer-to-string conversion in `fs.readFileSync()`
// translates it to FEFF, the UTF-16 BOM.
const stripBOM = txt => String(txt).replace(/^\uFEFF/, '')
module.exports = parseJson
parseJson.JSONParseError = JSONParseError
parseJson.noExceptions = (txt, reviver) => {
try {
return JSON.parse(stripBOM(txt), reviver)
} catch (e) {
// no exceptions
}
}


@@ -0,0 +1,48 @@
{
"name": "json-parse-even-better-errors",
"version": "3.0.0",
"description": "JSON.parse with context information on error",
"main": "lib/index.js",
"files": [
"bin/",
"lib/"
],
"scripts": {
"test": "tap",
"snap": "tap",
"lint": "eslint \"**/*.js\"",
"postlint": "template-oss-check",
"template-oss-apply": "template-oss-apply --force",
"lintfix": "npm run lint -- --fix",
"posttest": "npm run lint"
},
"repository": {
"type": "git",
"url": "https://github.com/npm/json-parse-even-better-errors.git"
},
"keywords": [
"JSON",
"parser"
],
"author": "GitHub Inc.",
"license": "MIT",
"devDependencies": {
"@npmcli/eslint-config": "^3.1.0",
"@npmcli/template-oss": "4.5.1",
"tap": "^16.3.0"
},
"tap": {
"check-coverage": true,
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
},
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.5.1"
}
}

front/app/node_modules/read-package-json/package.json

@@ -0,0 +1,58 @@
{
"name": "read-package-json",
"version": "6.0.0",
"author": "GitHub Inc.",
"description": "The thing npm uses to read package.json files with semantics and defaults and validation",
"repository": {
"type": "git",
"url": "https://github.com/npm/read-package-json.git"
},
"main": "lib/read-json.js",
"scripts": {
"prerelease": "npm t",
"postrelease": "npm publish && git push --follow-tags",
"release": "standard-version -s",
"test": "tap",
"npmclilint": "npmcli-lint",
"lint": "eslint \"**/*.js\"",
"lintfix": "npm run lint -- --fix",
"posttest": "npm run lint",
"postsnap": "npm run lintfix --",
"postlint": "template-oss-check",
"snap": "tap",
"template-oss-apply": "template-oss-apply --force"
},
"dependencies": {
"glob": "^8.0.1",
"json-parse-even-better-errors": "^3.0.0",
"normalize-package-data": "^5.0.0",
"npm-normalize-package-bin": "^3.0.0"
},
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.5.1",
"tap": "^16.0.1"
},
"license": "ISC",
"files": [
"bin/",
"lib/"
],
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"tap": {
"branches": 68,
"functions": 83,
"lines": 76,
"statements": 77,
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.5.1"
}
}