
WrenAI: The Revolutionary GenBI Agent Transforming Database Queries

By Bright Coding

Stop writing SQL. Start talking to your data. That’s the promise behind WrenAI, the open-source GenBI powerhouse that’s making waves across developer communities. In a world where data drives every decision, the bottleneck has always been the same: someone needs to know SQL. WrenAI demolishes that barrier, turning plain English (or any language) into precise SQL queries, beautiful charts, and actionable insights in seconds.

If you’ve ever waited three days for an analyst to pull a "simple" report, or if you’re a developer drowning in ad-hoc data requests, this tool will change your life. Built by the team at Canner, WrenAI leverages cutting-edge large language models to create what they call "Generative Business Intelligence"—a new paradigm where AI doesn’t just assist with analytics, it becomes your analytics engine.

In this deep dive, we’ll explore everything WrenAI offers, from its intelligent semantic layer to its powerful embedding API. You’ll find real code examples and a step-by-step installation guide, and you’ll discover why 1,300+ developers have already joined their Discord community. Ready to 10x your data workflow? Let’s jump in.

What is WrenAI and Why is Everyone Talking About It?

WrenAI is an open-source GenBI (Generative Business Intelligence) agent that transforms natural language into accurate SQL queries, dynamic visualizations, and comprehensive business insights. Created by Canner, a company with deep expertise in data virtualization and analytics, WrenAI represents a fundamental shift in how we interact with databases.

At its core, WrenAI solves a universal pain point: the SQL knowledge gap. While data has become democratized, data access hasn’t. Business users have questions. Developers have answers buried in databases. But the translation layer—SQL—creates a frustrating bottleneck. WrenAI eliminates this friction by acting as an intelligent intermediary between human questions and database answers.

The tool leverages state-of-the-art LLMs to perform three critical transformations: Text-to-SQL, Text-to-Chart, and Text-to-Insight. This isn’t just another chatbot wrapper around a database. WrenAI implements a sophisticated semantic layer that encodes your schema, metrics, and relationships into what they call MDL (Model Definition Language) models. This layer ensures the LLM understands your business context, not just your table structures.

Why is it trending now? The convergence of three factors: LLM maturity, developer fatigue with traditional BI, and the explosion of data sources. As teams adopt multiple databases (Postgres for transactional, Snowflake for warehousing, ClickHouse for analytics), the complexity multiplies. WrenAI’s universal connector approach resonates deeply. Its GitHub repository has exploded in popularity, earning Trendshift badges and sparking passionate discussions about the future of analytics.

The project’s open-source nature is strategic. While Canner offers a managed cloud service, the core technology is freely available, auditable, and extensible. This transparency builds trust in an era where data security is paramount. With support for 12+ major databases and 10+ LLM providers, WrenAI isn’t just a tool—it’s a platform for building AI-native analytics experiences.

Key Features That Make WrenAI Unstoppable

WrenAI packs a punch with four killer features that work together to create a seamless GenBI experience. Let’s break down what you get and why it matters for your workflow.

Talk to Your Data in Any Language

This is the headline feature that stops developers in their tracks. Ask "What were our top 5 selling products last quarter?" and WrenAI instantly generates the precise SQL query. But here’s the magic: it works in any language. Spanish, Mandarin, Japanese—your business users can query in their native tongue while WrenAI handles the translation to database-speak.

The technical sophistication here is profound. WrenAI doesn’t just pattern-match keywords. It uses few-shot learning and schema-aware prompting to understand intent. The system analyzes your MDL models to grasp table relationships, metric definitions, and business logic before generating SQL. This slashes the SQL learning curve to zero while maintaining accuracy that rivals senior data analysts.
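The internals aren’t documented line by line, but the general shape of schema-aware prompting can be sketched in a few lines of Python. The `build_prompt` helper and the MDL dict below are illustrative stand-ins, not WrenAI’s actual format:

```python
# Illustrative sketch of schema-aware prompting: the semantic layer's
# tables, relationships, and metric definitions are serialized into the
# prompt so the LLM grounds its SQL in the real schema instead of guessing.
# (Hypothetical structures -- not WrenAI's internal prompt format.)

def build_prompt(question: str, mdl: dict) -> str:
    tables = "\n".join(
        f"- {t['name']}({', '.join(c['name'] for c in t['columns'])})"
        for t in mdl["tables"]
    )
    joins = "\n".join(
        f"- {j['left']} -> {j['right']} ON {j['on']}"
        for j in mdl["relationships"]
    )
    metrics = "\n".join(
        f"- {m['name']} = {m['expression']}" for m in mdl["metrics"]
    )
    return (
        "You are a SQL generator. Use ONLY these tables and joins.\n"
        f"Tables:\n{tables}\nRelationships:\n{joins}\nMetrics:\n{metrics}\n"
        f"Question: {question}\nSQL:"
    )

mdl = {
    "tables": [
        {"name": "orders", "columns": [{"name": "id"}, {"name": "total"}, {"name": "customer_id"}]},
        {"name": "customers", "columns": [{"name": "id"}, {"name": "segment"}]},
    ],
    "relationships": [
        {"left": "orders", "right": "customers", "on": "orders.customer_id = customers.id"}
    ],
    "metrics": [{"name": "revenue", "expression": "SUM(total)"}],
}
prompt = build_prompt("What were our top 5 selling products last quarter?", mdl)
print(prompt)
```

The point is that the schema travels with every question: the model never has to invent a join because the only legal joins are spelled out in the prompt.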

GenBI Insights: AI That Actually Understands Your Business

Getting a SQL result is one thing. Getting a decision-ready insight is another. WrenAI’s GenBI engine doesn’t stop at queries—it writes natural language summaries, generates appropriate charts automatically, and compiles executive-ready reports. Imagine asking about churn trends and receiving not just numbers, but a narrative analysis with visualizations explaining what’s happening and why it matters.

The system intelligently selects chart types based on data structure. Time series become line charts. Categorical comparisons become bar charts. Correlations trigger scatter plots. Each visualization is optimized for clarity and impact, saving you hours of manual dashboard building.
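The selection logic lives inside WrenAI’s AI service; as a mental model, the heuristic described above can be sketched like this (an illustrative approximation, not the actual implementation):

```python
# Rough sketch of chart-type selection from the shape of the result set.
# (Illustrative heuristic -- not WrenAI's actual implementation.)

def pick_chart(columns: list[dict]) -> str:
    types = [c["type"] for c in columns]
    numeric = types.count("numeric")
    if "time" in types and numeric >= 1:
        return "line"        # time series -> line chart
    if "category" in types and numeric == 1:
        return "bar"         # categorical comparison -> bar chart
    if numeric >= 2:
        return "scatter"     # correlation between measures -> scatter plot
    return "table"           # fall back to a plain table

print(pick_chart([{"name": "month", "type": "time"}, {"name": "revenue", "type": "numeric"}]))    # line
print(pick_chart([{"name": "channel", "type": "category"}, {"name": "ltv", "type": "numeric"}]))  # bar
print(pick_chart([{"name": "price", "type": "numeric"}, {"name": "qty", "type": "numeric"}]))     # scatter
```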

Semantic Layer: The Secret Sauce for Accuracy

Here’s where WrenAI leaves competitors in the dust. The semantic layer acts as a governance and accuracy backbone. You define your business metrics, table relationships, and calculated fields in MDL (Model Definition Language) models. This becomes the LLM’s instruction manual, ensuring it never hallucinates joins or misinterprets column meanings.

Think of it as a type system for analytics. Instead of hoping the LLM guesses correctly, you explicitly define that "revenue" equals SUM(order_total) WHERE status='completed' and that "customers" join to "orders" on customer_id. This layer keeps outputs accurate, governed, and consistent across your entire organization. It’s the difference between a demo that works sometimes and a production system you can trust.

Embed via API: Build AI-Native Analytics Into Anything

WrenAI isn’t just a standalone tool—it’s a building block for the next generation of data products. The REST API lets you generate queries and charts inside your own applications. Build custom analytics agents, add natural language querying to your SaaS platform, or create intelligent Slack bots that answer data questions.

The API is already powering real applications. Check out their Streamlit Live Demo to see it in action. With comprehensive documentation and SDK support, you can go from idea to production integration in hours, not weeks. This transforms WrenAI from a tool into a platform for innovation.

Real-World Use Cases Where WrenAI Dominates

Let’s get concrete. Where does WrenAI actually move the needle? These five scenarios show its transformative power across different roles and industries.

Marketing Teams Finally Unlock Customer Data

Your marketing manager needs to segment users by LTV and campaign source for a retention analysis. Normally, this means a Jira ticket, three days of back-and-forth, and a frustrated analyst. With WrenAI, they simply ask: "Show me average lifetime value by acquisition channel for users who signed up in 2024."

WrenAI generates the complex SQL with proper joins across your users, orders, and campaign tables. It automatically creates a bar chart comparing channels and highlights that email campaigns have 40% higher LTV than social. The marketer gets actionable insights in 30 seconds, not three days. The analytics team gets their life back.

Sales Leaders Get Real-Time Pipeline Intelligence

Sales directors live and die by pipeline metrics, but Salesforce reports are rigid and data warehouses feel inaccessible. A sales leader can ask WrenAI: "What’s our pipeline coverage ratio by region this quarter, and how does it compare to last?"

WrenAI queries your CRM database, calculates coverage ratios, generates a trend line chart, and even flags that the APAC region is 30% behind target. The semantic layer ensures "pipeline" is correctly defined as SUM(deal_value) WHERE stage NOT IN ('Closed Lost'). No more waiting for weekly reports—insights happen in the moment.

Product Managers Analyze User Behavior at Scale

Understanding feature adoption requires digging into event streams—a nightmare of complex window functions and subqueries. A PM asks: "Show me the 7-day retention rate for users who tried the new AI feature versus those who didn’t."

WrenAI handles the cohort analysis automatically, generating the sophisticated SQL to track user behavior over time. It produces a retention curve visualization that clearly shows the AI feature boosts week-1 retention by 15%. Product decisions become data-driven by default, not by exception.

Customer Support Operations Optimize in Real-Time

Support managers need to track ticket volume, resolution time, and agent performance across multiple systems. With WrenAI, they ask: "What’s our average first response time by priority tier this week, and which agents are performing best?"

The system joins ticket, agent, and response tables, calculates SLAs, and generates a heatmap showing performance patterns. When the semantic layer defines "first response time" as MIN(response_time) GROUP BY ticket_id, you get consistent, trustworthy metrics across the entire support organization.

SaaS Platforms Embed AI Analytics for Customers

You’re building a logistics platform and want to offer customers insights on their shipping data. Instead of building a full BI stack, you embed WrenAI’s API. Your customers ask natural language questions in your app’s analytics section, and WrenAI returns queries and visualizations branded and filtered to their data only.

This turns analytics from a cost center into a revenue-driving feature. The semantic layer ensures data isolation and security, while the API handles the heavy lifting. You ship in weeks what used to take quarters.
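A thin wrapper in your backend is enough to enforce that scoping. This sketch assumes the self-hosted REST API’s `/v1/asks` endpoint and injects a `send` function so the wiring is visible; a stub stands in for the real HTTP call:

```python
# Sketch of a multi-tenant wrapper around WrenAI's ask API: every
# embedded question carries the tenant's user_id so row-level filters
# can be enforced downstream. (Endpoint path and payload shape assume
# the self-hosted REST API; adapt to your deployment.)
import json
from typing import Callable

def tenant_ask(question: str, tenant_id: str, send: Callable[[str, dict], dict]) -> dict:
    payload = {
        "query": question,
        "user_id": tenant_id,                # drives row-level security downstream
        "thread_id": f"tenant-{tenant_id}",  # isolate conversation history per tenant
    }
    return send("http://localhost:8000/v1/asks", payload)

# In production `send` would POST with requests; a stub shows the wiring here.
def fake_send(url: str, payload: dict) -> dict:
    return {"url": url, "payload": payload}

result = tenant_ask("Average delivery time this month?", "acme-logistics", fake_send)
print(json.dumps(result["payload"], indent=2))
```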

Step-by-Step Installation & Setup Guide

Ready to try WrenAI? The team claims you can be up and running in 3 minutes. Let’s put that to the test with this comprehensive setup guide.

Prerequisites

Before starting, ensure you have:

  • Docker and Docker Compose installed (v20.10+)
  • Access to one of the supported databases
  • An API key from your preferred LLM provider (OpenAI recommended for best results)
  • At least 4GB of available RAM

Step 1: Clone the Repository

Open your terminal and clone the WrenAI repository:

git clone https://github.com/Canner/WrenAI.git
cd WrenAI

Step 2: Configure Your Environment

Copy the environment template and edit it with your LLM credentials:

cp .env.example .env
# Edit .env with your favorite editor
nano .env

In the .env file, add your LLM API key:

# For OpenAI
OPENAI_API_KEY=sk-your-api-key-here

# Or for Azure OpenAI
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

Step 3: Launch WrenAI Services

Run the Docker Compose command to start all services:

docker compose up -d

This spins up four containers:

  • wren-ui: The web interface (port 3000)
  • wren-ai-service: The core AI engine
  • wren-engine: The query execution layer
  • wren-bootstrap: Initial configuration service

Step 4: Connect Your Database

Navigate to http://localhost:3000 in your browser. The setup wizard will guide you through:

  1. Selecting your database type (PostgreSQL, Snowflake, etc.)
  2. Entering connection credentials
  3. Testing the connection
  4. Choosing which tables to include in your semantic layer

Step 5: Define Your Semantic Layer

WrenAI will automatically scan your schema and generate a baseline MDL model. You can refine it by:

  • Adding calculated fields and metrics
  • Defining relationships between tables
  • Setting display names for business users
  • Creating custom categories

Step 6: Ask Your First Question

That’s it! Click "New Question" and type: "Show me total revenue by month for 2024." WrenAI will generate the SQL, execute it, and display a beautiful line chart within seconds.

For cloud deployment or advanced configurations, check the official installation docs.

Real Code Examples from WrenAI

Let’s examine actual implementation patterns using WrenAI’s configuration and API. These examples demonstrate production-ready usage.

Example 1: Docker Compose Configuration

The docker-compose.yml file orchestrates WrenAI’s microservices architecture:

version: '3.8'
services:
  wren-ui:
    image: ghcr.io/canner/wren-ui:latest
    ports:
      - "3000:3000"
    environment:
      - WREN_ENGINE_ENDPOINT=http://wren-engine:8080
      - WREN_AI_SERVICE_ENDPOINT=http://wren-ai-service:8000
    depends_on:
      - wren-engine
      - wren-ai-service

  wren-ai-service:
    image: ghcr.io/canner/wren-ai-service:latest
    environment:
      - LLM_PROVIDER=openai
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - WREN_ENGINE_ENDPOINT=http://wren-engine:8080
    volumes:
      - ./config:/app/config  # Mount custom config files

  wren-engine:
    image: ghcr.io/canner/wren-engine:latest
    environment:
      - DB_TYPE=postgres
      - DB_HOST=${DB_HOST}
      - DB_PORT=${DB_PORT}
      - DB_USER=${DB_USER}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_NAME=${DB_NAME}

Explanation: This configuration deploys three core services. The UI service exposes port 3000 for browser access. The AI service handles LLM interactions and requires your API key. The engine service connects to your database using environment variables. The volume mount lets you inject custom MDL models.

Example 2: PostgreSQL Connection Configuration

Here’s a real configuration snippet for connecting to PostgreSQL, adapted from their config examples:

# config/data_source/postgres.yaml
type: postgres
host: localhost
port: 5432
database: analytics_db
user: wren_user
password: ${POSTGRES_PASSWORD}  # Use environment variable for security
sslmode: prefer

tables:
  - schema: public
    name: users
    columns:
      - name: id
        type: integer
        primary_key: true
      - name: email
        type: varchar
        is_email: true  # Semantic hint for LLM
      - name: created_at
        type: timestamp
        is_time_dimension: true

  - schema: public
    name: orders
    columns:
      - name: id
        type: integer
        primary_key: true
      - name: user_id
        type: integer
        foreign_key: users.id  # Explicit relationship
      - name: total
        type: decimal
        is_metric: true  # Mark as business metric
        description: "Total order value in USD"

Explanation: This YAML file defines your semantic layer. The is_email and is_time_dimension flags guide the LLM to generate smarter queries. The foreign_key relationship prevents join errors. Marking total as a metric with a description ensures consistent business logic.

Example 3: Natural Language Query via API

Programmatically query your data using WrenAI’s REST API:

import requests
import time

# Initialize the query
response = requests.post(
    "http://localhost:8000/v1/asks",
    headers={"Content-Type": "application/json"},
    json={
        "query": "What's the average order value by customer segment?",
        "mdl_hash": "your_model_hash",  # From your semantic layer
        "thread_id": "unique_session_123",  # For conversation history
        "user_id": "marketing_manager@company.com"
    }
)

query_result = response.json()
ask_id = query_result["ask_id"]

# Poll for results (async processing)
for _ in range(60):  # give up after ~60 seconds
    status = requests.get(f"http://localhost:8000/v1/asks/{ask_id}/result").json()
    if status["status"] == "finished":
        sql = status["response"]["sql"]
        chart = status["response"]["chart"]
        summary = status["response"]["summary"]
        break
    time.sleep(1)  # back off between polls instead of hammering the service
else:
    raise TimeoutError("WrenAI did not answer within 60 seconds")

print(f"Generated SQL:\n{sql}")
print(f"Chart Type: {chart['type']}")
print(f"AI Summary: {summary}")

Explanation: This pattern shows WrenAI’s async API design. You POST a natural language question, receive an ask_id, then poll for results. The response includes the generated SQL, chart configuration, and a natural language summary. The thread_id enables conversational context, while user_id supports audit logging.

Example 4: Defining a Calculated Metric in MDL

Create sophisticated business metrics in your semantic layer:

{
  "name": "revenue",
  "table": "orders",
  "expression": "SUM(CASE WHEN status = 'completed' THEN total ELSE 0 END)",
  "description": "Total revenue from completed orders only",
  "format": "currency",
  "category": "Sales Metrics",
  "tags": ["kpi", "executive"]
}

Explanation: This MDL snippet defines a calculated metric that filters out incomplete orders. The expression uses standard SQL, ensuring the LLM generates correct aggregations. Tags and categories help organize metrics for different user roles. The format hint tells the UI to display as currency.

Advanced Usage & Best Practices

To get the most out of WrenAI, implement these pro strategies:

Optimize Your Semantic Layer Incrementally: Start with auto-generated MDL models, then iteratively refine. Add calculated metrics for complex business logic. Use descriptions extensively—every column should have a clear, concise explanation. This is your LLM’s instruction manual; make it comprehensive.

Choose the Right LLM for the Job: While WrenAI supports many models, performance varies dramatically. For production use, GPT-4 or Claude 3.5 Sonnet deliver the best accuracy. For cost-sensitive applications, DeepSeek or Groq models offer good performance at lower prices. Always test with your specific schema and query patterns.

Implement Query Caching: WrenAI’s AI service can cache generated SQL for common questions. Enable Redis caching in production to slash response times from seconds to milliseconds for repeated queries. This also reduces LLM API costs significantly.
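The exact cache configuration is deployment-specific, but application-side caching of generated SQL is easy to sketch. Here a plain dict stands in for Redis; in production you would swap in `redis-py` with a TTL (illustrative code, not WrenAI’s built-in cache):

```python
# Sketch of caching generated SQL, keyed by a normalized question plus the
# semantic-layer hash (so a model change invalidates old entries).
# A dict stands in for Redis for illustration.
import hashlib

cache: dict[str, str] = {}

def cache_key(question: str, mdl_hash: str) -> str:
    normalized = " ".join(question.lower().split())  # case/whitespace-insensitive
    return hashlib.sha256(f"{mdl_hash}:{normalized}".encode()).hexdigest()

def get_sql(question: str, mdl_hash: str, generate) -> str:
    key = cache_key(question, mdl_hash)
    if key not in cache:
        cache[key] = generate(question)  # the expensive LLM round-trip
    return cache[key]

calls = []
def fake_llm(q):
    calls.append(q)
    return "SELECT 1"

get_sql("Total revenue  by month?", "abc123", fake_llm)
get_sql("total revenue by month?", "abc123", fake_llm)  # normalized hit, no second LLM call
print(len(calls))  # 1
```

Keying on the MDL hash as well as the question means a semantic-layer update naturally misses the cache and regenerates fresh SQL.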

Secure Your Deployment: Never expose WrenAI directly to the internet. Use reverse proxies with authentication. Leverage the user_id field in API calls to enforce row-level security at the database level. For sensitive data, consider the self-hosted deployment even if using the cloud version for redundancy.

Monitor and Fine-Tune: Enable query logging to identify where the LLM struggles. If certain question types consistently fail, add example pairs to your semantic layer’s few-shot learning section. Treat your MDL models as living documentation that evolves with your business.

WrenAI vs. Alternatives: Why It Wins

How does WrenAI stack up against other Text-to-SQL solutions? Let’s compare:

| Feature | WrenAI | LangChain SQL Agent | LlamaIndex SQL | Custom GPT Wrapper |
|---|---|---|---|---|
| Setup time | 3 minutes with Docker | Hours of coding | Hours of coding | Days of development |
| Semantic layer | Built-in MDL models | Manual prompt engineering | Manual schema indexing | Completely manual |
| Accuracy | High (schema-aware) | Medium (generic) | Medium (depends on indexing) | Low to medium |
| Supported databases | 12+ native connectors | 3-4 basic connectors | 3-4 basic connectors | 1-2 (custom) |
| Chart generation | Native Text-to-Chart | Requires integration | Requires integration | Manual implementation |
| Embedding API | Production-ready REST | Build from scratch | Build from scratch | Build from scratch |
| Governance | MDL models ensure consistency | No built-in governance | No built-in governance | No built-in governance |
| Community | 1.3k+ Discord, active GitHub | Large but fragmented | Growing but small | None |

Key Differentiator: The semantic layer. While alternatives require you to stuff schema information into prompts (hitting token limits and creating inconsistency), WrenAI’s MDL models provide a persistent, version-controlled knowledge base that the LLM references intelligently. This is the difference between a brittle demo and a production system.

Cost Consideration: WrenAI’s open-source version is free; you only pay for LLM usage. The cloud version adds convenience but follows similar pricing. Building a comparable solution with LangChain would require months of engineering time—easily $50k+ in opportunity cost.

Frequently Asked Questions

How accurate is WrenAI’s SQL generation?

With a well-defined semantic layer, accuracy exceeds 90% for common analytical queries. The MDL models prevent the hallucination issues that plague generic LLM solutions. For edge cases, you can add validation rules and example queries to improve performance.

Which LLM provider works best?

OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet deliver the highest accuracy. For self-hosted deployments, Llama 3 70B provides good results. Avoid smaller models—they struggle with complex joins and window functions.

Is my database data sent to external LLMs?

Only your schema metadata (table names, column types) and natural language questions are sent. No actual data rows leave your system unless you’re using the cloud version. For maximum security, self-host with a local LLM via Ollama.

Can WrenAI handle complex multi-join queries?

Absolutely. The semantic layer explicitly defines relationships, so the LLM knows exactly how to join tables. Users have successfully generated queries with 8+ joins and multiple subqueries. The key is properly modeling your relationships in the MDL.

What’s the difference between OSS and Cloud versions?

The open-source version gives you full control and requires self-hosting. The cloud version offers managed infrastructure, automatic updates, and premium support. Both share the same core engine. Large enterprises often use both: OSS for sensitive data, Cloud for convenience.

How does it compare to traditional BI tools like Tableau or Looker?

Traditional BI requires manual dashboard building. WrenAI is conversational and exploratory. It’s not about replacing BI tools entirely, but augmenting them with AI-powered ad-hoc analysis. Many users connect WrenAI to their existing BI databases.

Can I customize the visualization styles?

Yes. The API returns chart configurations as JSON. You can override colors, fonts, and layouts to match your brand. The UI is built with React and can be customized or replaced entirely via the API.
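Since the chart arrives as plain JSON, branding comes down to a dictionary merge over the returned config. A minimal sketch, assuming illustrative field names rather than the exact schema:

```python
# Sketch: apply brand overrides on top of a chart config returned as JSON.
# (Field names below are illustrative, not the exact WrenAI schema.)
def merge(base: dict, overrides: dict) -> dict:
    out = dict(base)
    for k, v in overrides.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge(out[k], v)  # recurse into nested sections
        else:
            out[k] = v
    return out

chart = {"type": "bar", "style": {"color": "#4e79a7", "font": "Inter"}}
branded = merge(chart, {"style": {"color": "#ff6600"}})
print(branded)  # {'type': 'bar', 'style': {'color': '#ff6600', 'font': 'Inter'}}
```

A recursive merge (rather than `dict.update`) keeps untouched nested fields like the font while swapping only the overridden ones.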

Conclusion: The Future of Analytics is Conversational

WrenAI isn’t just another AI wrapper—it’s a fundamental reimagining of how humans interact with data. By combining a robust semantic layer with state-of-the-art LLMs, it delivers on the promise that has eluded the BI industry for decades: truly self-service analytics that actually works.

The implications are massive. Data teams become force multipliers instead of bottlenecks. Business users become data-driven by default. Developers can embed sophisticated analytics into applications with minimal effort. This is the future we’ve been waiting for.

What excites me most is the open-source commitment. In a world where AI capabilities are increasingly centralized, WrenAI puts power back in the hands of developers. You can audit the code, extend it for your needs, and contribute improvements. The vibrant community ensures it evolves rapidly.

If you’re still making stakeholders wait days for reports, or if you’re spending more time writing SQL than building features, WrenAI is your escape hatch. The 3-minute setup claim is real. The accuracy is real. The community support is real.

Don’t take my word for it—try it yourself. Head to github.com/Canner/WrenAI, star the repo, and spin up a local instance. Join the Discord community to see how others are revolutionizing their analytics workflows. The future of BI is here, and it speaks your language.


Ready to transform your data workflow? Get started with WrenAI today →
