AI Models
AI models make predictions based on patterns learned from their training data. They can operate across domains such as numerical data, text, and multimedia.
World Foundation Models: 10 Use Cases
Training robots and autonomous vehicles (AVs) in the physical world can be costly, time-consuming and risky. World Foundation Models offer a scalable alternative by enabling realistic simulations of real-world environments. These models accelerate development and deployment in robotics, AVs, and other domains by reducing reliance on physical testing.
Time Series Foundation Models: Use Cases & Benefits
Time series foundation models (TSFMs) build on advances in foundation models from natural language processing and vision. Using transformer-based architectures and large-scale pretraining data, they deliver zero-shot forecasting performance and adapt across sectors such as finance, retail, energy, and healthcare.
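Zero-shot evaluation of a TSFM usually means forecasting a series the model has never seen and scoring it with a scale-free metric such as MASE. A minimal sketch of that protocol, using a seasonal-naive baseline as a stand-in for the model (no real TSFM is called here, and the synthetic series is illustrative):

```python
import numpy as np

def seasonal_naive_forecast(history, horizon, season=12):
    # Repeat the last observed season as the forecast.
    last_season = history[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last_season, reps)[:horizon]

def mase(actual, forecast, history, season=12):
    # Mean Absolute Scaled Error: forecast error scaled by the
    # in-sample seasonal-naive error. Values < 1 beat the naive baseline.
    naive_err = np.mean(np.abs(history[season:] - history[:-season]))
    return np.mean(np.abs(actual - forecast)) / naive_err

# Synthetic monthly series with trend plus seasonality.
rng = np.random.default_rng(0)
t = np.arange(60)
series = 10 + 0.1 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 60)

history, actual = series[:48], series[48:]
fc = seasonal_naive_forecast(history, horizon=12)
print(f"MASE: {mase(actual, fc, history):.3f}")
```

Swapping the baseline for a real TSFM's predictions leaves the rest of the evaluation unchanged, which is what makes zero-shot comparisons across sectors straightforward.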
Compare Relational Foundation Models
We benchmarked SAP-RPT-1-OSS against gradient boosting (LightGBM, CatBoost) on 17 tabular datasets spanning the semantic-numeral spectrum: small high-semantic tables, mixed business datasets, and large low-semantic numerical datasets. Our goal was to measure where a relational LLM's pretrained semantic priors may provide advantages over traditional tree models and where they face challenges under scale or low-semantic structure.
Tabular Models Benchmark: Performance Across 19 Datasets 2026
We benchmarked 7 widely used tabular learning models across 19 real-world datasets, covering ~260,000 samples and over 250 total features, with dataset sizes ranging from 435 to nearly 49,000 rows. Our goal was to identify the top-performing model families for datasets of different sizes and structures (e.g. numeric vs. categorical features).
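Comparing model families across many datasets is usually summarized by average rank rather than raw scores, since accuracies on different datasets are not on a comparable scale. A sketch of that aggregation step; the model names and numbers below are made up for illustration, not the article's results:

```python
import pandas as pd

# Hypothetical per-dataset accuracies for three model families.
results = pd.DataFrame(
    {
        "gbdt":   [0.91, 0.88, 0.95, 0.83],
        "mlp":    [0.89, 0.90, 0.93, 0.80],
        "linear": [0.85, 0.86, 0.90, 0.82],
    },
    index=["dataset_a", "dataset_b", "dataset_c", "dataset_d"],
)

# Rank models within each dataset (1 = best), then average the ranks.
ranks = results.rank(axis=1, ascending=False)
print(ranks.mean().sort_values())
```

A mean rank close to 1 means a family wins on most datasets; slicing the rank table by dataset size or feature type then shows where each family does best.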