AI Models
AI models make predictions based on patterns learned from their training data. They can operate across domains such as numerical data, text, and multimedia.
Vision Language Models Compared to Image Recognition
Can advanced Vision Language Models (VLMs) replace traditional image recognition models? To find out, we benchmarked 16 leading models across three paradigms: traditional CNNs (ResNet, EfficientNet), VLMs (such as GPT-4.1 and Gemini 2.5), and cloud vision APIs (AWS, Google, Azure).
World Foundation Models: 10 Use Cases
Training robots and autonomous vehicles (AVs) in the physical world can be costly, time-consuming, and risky. World Foundation Models offer a scalable alternative by enabling realistic simulations of real-world environments. These models accelerate development and deployment in robotics, AVs, and other domains by reducing reliance on physical testing.
Time Series Foundation Models: Use Cases & Benefits
Time series foundation models (TSFMs) build on advances in foundation models from natural language processing and vision. Using transformer-based architectures and large-scale pretraining data, they deliver strong zero-shot forecasting performance and adapt across sectors such as finance, retail, energy, and healthcare.
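To make the zero-shot idea concrete: a pretrained model forecasts a series it has never seen, with no task-specific training. The sketch below is a toy illustration of that interface only; `ZeroShotForecaster` is a hypothetical stand-in whose seasonal-naive rule a real TSFM would replace with a pretrained transformer.

```python
# Toy illustration of the zero-shot forecasting interface a TSFM exposes:
# the model receives an unseen context window and emits a forecast without
# fine-tuning. ZeroShotForecaster is a hypothetical stand-in, not a real TSFM.

class ZeroShotForecaster:
    def __init__(self, season_length: int):
        self.season_length = season_length

    def predict(self, context: list[float], horizon: int) -> list[float]:
        # Seasonal-naive placeholder: repeat the last observed season.
        season = context[-self.season_length:]
        return [season[i % self.season_length] for i in range(horizon)]

# A series with period 4 that the "model" has never seen before.
history = [10, 20, 30, 40, 12, 22, 32, 42]
model = ZeroShotForecaster(season_length=4)
forecast = model.predict(history, horizon=6)
print(forecast)  # repeats the last season: [12, 22, 32, 42, 12, 22]
```

The point of the sketch is the calling pattern, not the forecast rule: evaluation of a real TSFM follows the same shape, swapping in a pretrained model behind the same `predict(context, horizon)` call.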
Compare Relational Foundation Models
We benchmarked SAP-RPT-1-OSS against gradient boosting (LightGBM, CatBoost) on 17 tabular datasets spanning the semantic-numeric spectrum: small, high-semantic tables, mixed business datasets, and large, low-semantic numerical datasets. Our goal was to measure where a relational LLM's pretrained semantic priors may provide advantages over traditional tree models and where they face challenges under scale or low-semantic structure.