
AI Models

AI models make predictions based on patterns in their training data. They can work in any domain, whether the data is numbers, text, or multimedia.

Explore AI Models

AI Models · Feb 27

Vision Language Models Compared to Image Recognition

Can advanced Vision Language Models (VLMs) replace traditional image recognition models? To find out, we benchmarked 16 leading models across three paradigms: traditional CNNs (ResNet, EfficientNet), VLMs (such as GPT-4.1 and Gemini 2.5), and cloud APIs (AWS, Google, Azure).

AI Models · Feb 11

World Foundation Models: 10 Use Cases

Training robots and autonomous vehicles (AVs) in the physical world can be costly, time-consuming and risky. World Foundation Models offer a scalable alternative by enabling realistic simulations of real-world environments. These models accelerate development and deployment in robotics, AVs, and other domains by reducing reliance on physical testing.

AI Models · Feb 10

Time Series Foundation Models: Use Cases & Benefits

Time series foundation models (TSFMs) build on advances in foundation models from natural language processing and vision. Using transformer-based architectures and large-scale training data, they deliver zero-shot forecasting and adapt across sectors such as finance, retail, energy, and healthcare.

AI Models · Feb 4

Compare Relational Foundation Models

We benchmarked SAP-RPT-1-OSS against gradient boosting (LightGBM, CatBoost) on 17 tabular datasets spanning the semantic-numeral spectrum: small high-semantic tables, mixed business datasets, and large low-semantic numerical datasets. Our goal was to measure where a relational LLM's pretrained semantic priors may provide advantages over traditional tree models, and where they face challenges under scale or low-semantic structure.

AI Models · Jan 29

Tabular Models Benchmark 2026: Performance Across 19 Datasets

We benchmarked 7 widely used tabular learning models across 19 real-world datasets, covering ~260,000 samples and over 250 total features, with dataset sizes ranging from 435 to nearly 49,000 rows. Our goal was to understand top-performing model families for datasets of different sizes and structure (e.g. numeric vs.