Dither AI is building a transformer-based time-series modality for multimodal AI models.
Transformers have powered an AI revolution by letting researchers drastically scale up model size without the limitations of earlier architectures. The design underpins the recent advances in AI products from OpenAI, Anthropic, and Google. The main limitation of the transformer design is its requirement for large amounts of high-quality data.
Notice we did not say large language model (LLM) in the above paragraph. Language is a single modality of the multimodal products released by the leading companies. A modality is a representation of data - images, text, audio, etc. You can train an LLM (text modality) or a vision model (image modality). You can also train a model on both text and images to get a multimodal model.
Time-series can be considered another modality. It has its own peculiarities under the transformer architecture, which make training difficult. However, we have ample evidence that a time-series modality is possible and can compete with traditional analysis techniques while offering a simpler interface. This potential is what we are pursuing, and we have already proven more effective than Google and Amazon in this domain (link).
We are at a turning point with AI agents and humans. Right now, humans can benefit from AI technologies by increasing their output. Yet we will slowly abstract that human input away across a variety of fields. The AI systems that will take on low-level human work are called agents. The definition is fuzzy, but it is undeniable that they will need multiple modalities to interpret the world. They need to understand text. They need to understand audio. They need to understand images.
These agents will also need to make data-driven predictions based on real-world feedback - the time-series modality.
Right now, time-series is the least researched modality, and Dither AI is at the forefront. The time to develop this technology is now. We are implementing real tools to test the effectiveness of our models in real-world settings, and we are designing these tools for both humans and agents.
When we began in early 2024, publicly available time-series data was scarcer than it is today. We committed to Solana for its rich data, which also supplied a test case. We knew that transformers require large amounts of data, and that is what Solana afforded.
If Dither AI is successful, it will expand beyond the current cryptocurrency paradigm. Imagine a model that does for data analytics what ChatGPT did for natural language. We would make it possible for anyone to perform AI-driven analysis and forecasting.
We are taking a moonshot, because making the world even 1.00001x more efficient is a billion-dollar opportunity.
Vibe Check: Test Our AI Assessment Demo
Telegram: Join Our Telegram Community
Twitter: Our Main Twitter Account
Token: Swap for $DITH on Jupiter
Demos: Explore our Demos and API
History: How We Got Here
AI Thesis: Our View on AI
Hypothesis: All Our Current Hypotheses
Design Thesis: Inspired by Paul and Derek