Source Engine is the firewall against misinformation.
Initially, the Source Engine will provide four answers.
Without writing comments for hours.
Without emotionally straining, unfruitful confrontations.
Text passages are highlighted if the evidence is significant (see the example below).
Source Engine works on top of any website, either as a browser add-on or as a server-based service.
"You should stop eating sugar, because sugar is unhealthy."
— any health website
True for: a professional athlete, 45, female, one day before a contest?
Scientific data might only be available for 33-year-old male students or PhDs. One generalizing statement can be completely wrong for one person and right for another.
Disclaimer: All examples are for demonstration only. This is no medical advice.
We compare text strings (video transcripts and images planned) to a database of empirical data.
Input: "sugar is unhealthy"
Sources will be clickable in a real demo.
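The comparison step above can be sketched as a toy program. This is only an illustration of the idea of scoring a claim against a database of evidence snippets; the actual Source Engine pipeline is not public, and all names and data here are made up. A real system would use semantic matching rather than plain word overlap.

```python
# Toy sketch: score a claim against a small "database" of evidence
# snippets by word overlap (Jaccard similarity). Illustrative only.

def tokens(text):
    """Lowercased word set of a text string."""
    return set(text.lower().split())

def match_evidence(claim, database):
    """Return database entries ranked by word overlap with the claim."""
    claim_tokens = tokens(claim)
    scored = []
    for entry in database:
        entry_tokens = tokens(entry["statement"])
        overlap = claim_tokens & entry_tokens
        union = claim_tokens | entry_tokens
        score = len(overlap) / len(union) if union else 0.0
        scored.append((score, entry))
    # Keep only entries with some overlap, best matches first.
    return [e for s, e in sorted(scored, key=lambda x: -x[0]) if s > 0]

# Hypothetical mini-database of extracted evidence statements.
database = [
    {"statement": "sugar intake and metabolic health in adults", "source": "Paper A"},
    {"statement": "exercise improves cardiovascular health", "source": "Paper B"},
]

ranked = match_evidence("sugar is unhealthy", database)
```

Here only "Paper A" shares a word with the input, so it is the single match returned.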
Factual content is not so much about who found it or when. We create a timeline that shows the connections between similar ideas at different times by different people. This allows us to trace back evidence and how it spread.
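One way to picture such a timeline is as a small linked structure of dated claim records, each pointing to the earlier records it builds on. The fields and data below are purely hypothetical; this is a sketch of the tracing idea, not the actual data model.

```python
# Toy sketch of an idea timeline: each record points to earlier records
# it builds on, so a claim's lineage can be traced back. Illustrative only.

records = {
    "r1": {"year": 1972, "author": "Researcher A", "claim": "original finding", "builds_on": []},
    "r2": {"year": 2016, "author": "Review B", "claim": "summary of finding", "builds_on": ["r1"]},
    "r3": {"year": 2020, "author": "Blog C", "claim": "popularized claim", "builds_on": ["r2"]},
}

def trace_back(record_id, records):
    """Walk the builds_on links and return the claim's lineage, oldest first."""
    chain, stack, seen = [], [record_id], set()
    while stack:
        rid = stack.pop()
        if rid in seen:
            continue
        seen.add(rid)
        chain.append(rid)
        stack.extend(records[rid]["builds_on"])
    return sorted(chain, key=lambda rid: records[rid]["year"])
```

Tracing the 2020 blog claim walks back through the review to the original finding.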
Our model matches real scientific sources to any given statement (work in progress). It extracts the core evidence of papers and connects them semantically. By adding an available LLM to compare relevance, we can quickly output clean statements with citations of backing sources.
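The semantic-ranking step can be illustrated with a minimal sketch. The "embedding" below is just a bag-of-words count, standing in for a learned sentence encoder, and the downstream LLM citation step is not shown; everything here is an assumption for demonstration.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': bag-of-words term counts. A real system would
    use a learned sentence encoder; this only illustrates the ranking."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_sources(statement, core_claims):
    """Rank extracted paper claims by similarity to the statement; a
    downstream LLM (not shown) would turn top hits into cited text."""
    vec = embed(statement)
    return sorted(core_claims, key=lambda c: cosine(vec, embed(c)), reverse=True)
```

With a real encoder, claims that share meaning but no words would still rank highly; with this toy version, only word overlap counts.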
We believe that a hybrid approach between "AI" and humans is more powerful than each alone. Our goal is to align the incentives of science-fame with providing insights for society. To continuously train our model, we work with science-based partners (see Team).
Misinformation usually spreads much further than facts, because it is sensational. Something new grabs attention. That curiosity is mostly a good thing, but the human mind can be hacked by clickbait and other techniques. Not everyone has the same capability to detect it. Reading the original papers behind every piece of information online is not a practical approach.
Society pays a huge price for all the fake information being spread. There are medical, societal and economic costs, in addition to all the preventable suffering of individuals. Worldwide estimates run into the high billions, but thinking about an affected person in our own life is enough to imagine the consequences.
While it is usually possible to trace back most sources with enough research, it can be very time-intensive. Quality journalists spend a good amount of their effort verifying sources. By adding the available evidence at the point where it is needed, everyone is faster. Let us use our collective intelligence.
Almost 40 stakeholder interviews yielded remarkably positive insights. Our browser add-on (closed beta) highlights text, and the backend (PoC) lets users rate sources. But that's only the beginning. Next we need:
Sign up for irregular announcements.
UPDATE March 2023: We had to put this project "on hold" (probably forever), since we could not find a working business model to sustain the idea long term. It's a social service to protect citizens from fake news. Most people don't want to pay to find out they are (partially) wrong all the time ;-)
November 2023: Multiple AI startups are trying to develop reliable, science-based source search. We will not continue the Source Engine until someone with a similar base technology succeeds. If they license their technology to us, we may re-evaluate our decision and enable a more fact-based web.