

What is a source file?

A source file is a JSON or JavaScript file that represents a website. These files are created by the user and guide Saffron on how to parse a website.

Each parser uses a different source file structure, but some options are common to all of them.

Common options


name

This field identifies the source file. Although Saffron does not check whether the name is unique, it is required to be.

It is also used by the configuration options includeOnly and exclude.


tableName

Default value: name

When requesting (or saving) articles from (to) the database, Saffron will send the tableName as the path where the articles are located. This is useful when multiple source files want to save to the same place.

If it is not defined, it falls back to the source's name field.
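For instance, two hypothetical source files can store their articles under the same path by sharing a tableName (all names and urls below are placeholders):

```json
// greek-news.json — placeholder name, empty url
{
    "name": "greek-news",
    "tableName": "news",
    "url": [""],
    "type": "html",
    "scrape": {}
}

// world-news.json — saves to the same "news" path
{
    "name": "world-news",
    "tableName": "news",
    "url": [""],
    "type": "html",
    "scrape": {}
}
```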


interval

Default value: 3600000

The time between the jobs that are issued for this source file, in milliseconds. For example, if the source is scraped at 4 AM, the next job will be issued for 5 AM.

Note that Saffron will add an offset of at most 500 seconds.

This option will override the configuration option scheduler.jobsInterval.
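The scheduling above can be sketched as follows. This only illustrates the documented behavior (a fixed interval plus an offset of at most 500 seconds); it is not Saffron's actual scheduler code.

```javascript
// Illustration only: next job time = now + interval + offset (offset ≤ 500 s).
const interval = 3600000;        // default: 1 hour, in milliseconds
const maxOffsetMs = 500 * 1000;  // documented maximum offset: 500 seconds

const offset = Math.floor(Math.random() * maxOffsetMs);
const nextRun = Date.now() + interval + offset;
```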


retryInterval

Default value: configuration.scheduler.jobsInterval / 2

The interval after which a source scraping job will be reissued in case of failure.


timeout

Default value:

The amount of time Saffron will wait for a response from a url. If it is exceeded, a parser error will be thrown.

This option will override the configuration option jobs.timeout.


amount

Default value: configuration.workers.articles.amount

The number of articles a parser will return.

This option will override the configuration option articles.amount.


ignoreCertificates

Default value: false

If set to true, all TLS certificates will be ignored. This is useful when a website has not updated its certificates.


extra

This field stays intact with whatever you put inside. It allows the user to pass custom information about the source file. It can be used like this:

saffron.on('event', articles => {
    const extra = articles.getSources().extra;
    if (extra) {
        // ...
    }
});

url

This field contains the url(s) where the news is displayed. It can be a string or an array.

In case of one url, it can be used like:

url: ""
url: [""]
url: [[""]]

In case a website has more than one sub-site where it displays its news, multiple urls can be used. In that case the scrape options will be applied to all of the urls, and it can be used like:

url: [
    "",
    ""
]

If you want to identify which url an article was found at, you can place categories before the url. These categories will be added, alongside the provided url, to the categories field of the article.

url: [
    ["News", ""],
    ["Announcements", "Other category name", ""]
]


type

The type of parser that will be used during scraping. For more details, read about parsers.


encoding

The encoding of the website.


userAgent

The User-Agent header that will accompany the request.


scrape

This field contains all the scrape options needed by the specified parser. You can check the scrape formats for each parser: WordPress, RSS, HTML or Dynamic.


"name": "",
"tableName": "",
"interval": 3600000,
"retryInterval": 1800000,
"timeout": 10000,
"amount": 10,
"ignoreCertificates": false,
"extra": {
"key": "value",
// ...
"url": [
"type": "html",
"scrape": {
// ...