4 changes: 2 additions & 2 deletions test/test-markdown-frontmatter.js
@@ -6,7 +6,7 @@ const chalk = require('chalk')
// accepted data field values
const sdk_languages = ['nodejs', 'scala', 'python', 'swift', 'csharp', 'objective-c', 'android-java', 'any', 'java', 'kotlin', 'dart', 'golang', 'c++']

const tags = ['Ottoman', 'Ktor', 'REST API', 'Express', 'Flask', 'TLS', 'Configuration', 'Next.js', 'iOS', 'Xcode', '.NET', 'Xamarin', 'Authentication', 'OpenID', 'Keycloak', 'Android', 'P2P', 'UIKit', 'Installation', 'Spring Boot', 'Spring Data', 'Transactions', 'SQL++ (N1QL)', 'Optimization', 'Community Edition', 'Docker', 'Data Modeling', 'Metadata', 'Best Practices', 'Data Ingestion', 'Kafka', 'Support', 'Customer', 'Prometheus', 'Monitoring', 'Observability', 'Metrics', 'Query Workbench', 'ASP.NET', 'linq', 'DBaaS', 'App Services', 'Flutter', 'Gin Gonic', 'FastAPI', 'LangChain', "OpenAI", "Streamlit", 'Google Gemini', 'Nvidia NIM', 'LLama3', 'AWS', 'Artificial Intelligence', 'Cohere', 'Jina AI', 'Mistral AI', 'Ragas', 'Haystack', 'LangGraph', 'Amazon Bedrock', 'CrewAI', 'PydanticAI', 'C++', 'C++ SDK', 'smolagents', 'Ag2', 'Autogen', 'Couchbase Edge Server', 'Deepseek', 'OpenRouter', 'mastra']
const tags = ['Ottoman', 'Ktor', 'REST API', 'Express', 'Flask', 'TLS', 'Configuration', 'Next.js', 'iOS', 'Xcode', '.NET', 'Xamarin', 'Authentication', 'OpenID', 'Keycloak', 'Android', 'P2P', 'UIKit', 'Installation', 'Spring Boot', 'Spring Data', 'Transactions', 'SQL++ (N1QL)', 'Optimization', 'Community Edition', 'Docker', 'Data Modeling', 'Metadata', 'Best Practices', 'Data Ingestion', 'Kafka', 'Support', 'Customer', 'Prometheus', 'Monitoring', 'Observability', 'Metrics', 'Query Workbench', 'ASP.NET', 'linq', 'DBaaS', 'App Services', 'Flutter', 'Gin Gonic', 'FastAPI', 'LangChain', "OpenAI", "Streamlit", 'Google Gemini', 'Nvidia NIM', 'LLama3', 'AWS', 'Artificial Intelligence', 'Cohere', 'Jina AI', 'Mistral AI', 'Ragas', 'Haystack', 'LangGraph', 'Amazon Bedrock', 'CrewAI', 'PydanticAI', 'C++', 'C++ SDK', 'smolagents', 'Ag2', 'Autogen', 'Couchbase Edge Server', 'Deepseek', 'OpenRouter', 'mastra', 'Looker Studio', 'Google Data Studio', 'Connector', 'Couchbase Columnar', 'Views-only', 'Data API']

const technologies = ['connectors', 'kv', 'query', 'capella', 'server', 'index', 'mobile', 'fts', 'sync gateway', 'eventing', 'analytics', 'udf', 'vector search', 'react', 'edge-server', 'app-services']

@@ -95,7 +95,7 @@ const test = (data, path) => {
process.exit(1)
}
//testing title length
if (data.title?.length > 72) {
if (data.title?.length > 100) {
makeResponseFailure(data, path, 'Invalid title Length', data.title?.length, 'Post title must be less than 72 characters long')
process.exit(1)
}
122 changes: 122 additions & 0 deletions tutorial/markdown/connectors/looker-studio/columnar/readme.md
@@ -0,0 +1,122 @@
---
# frontmatter
path: "/tutorial-looker-studio-columnar"
# title and description do not need to be added to markdown, start with H2 (##)
title: Looker Studio with Couchbase Columnar (Views-only with Tabular Analytics Views)
short_title: Columnar (Views-only TAVs)
description:
- Connect Google Looker Studio to Couchbase Columnar using Tabular Analytics Views (TAVs) only
- Create Tabular Analytics Views in Capella and use them as stable, optimized datasets
- Learn authentication, configuration, schema inference, and troubleshooting
content_type: tutorial
filter: connectors
technology:
- server
- query
tags:
- Looker Studio
- Couchbase Columnar
- Connector
- Views-only
sdk_language:
- nodejs
length: 20 Mins
---

<!-- [abstract] -->

## Overview

This is a views-only connector for Google Looker Studio and Couchbase Columnar. It exclusively reads from Couchbase Tabular Analytics Views (TAVs) in Capella. Create one or more TAVs first, then connect Looker Studio to those views for analysis.

The connector authenticates with Basic Auth to the Columnar API (`/api/v1/request`) and infers schema automatically using `array_infer_schema` so Looker Studio fields are created with reasonable types.
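
For illustration, a minimal Node.js sketch of that round trip is shown below. It assumes the API accepts a JSON body with a `statement` field and returns rows under `results`; the host, credentials, and query are placeholders rather than the connector's actual code.

```javascript
// Minimal sketch of a Basic Auth call to the Columnar API (illustrative only).
// Assumptions: the endpoint accepts {"statement": "..."} and responds with
// {"results": [...]}; the host and credentials below are placeholders.
const host = 'cb.example.cloud.couchbase.com';
const auth = Buffer.from('username:password').toString('base64');

async function runStatement(statement) {
  const res = await fetch(`https://${host}/api/v1/request`, {
    method: 'POST',
    headers: {
      Authorization: `Basic ${auth}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ statement }),
  });
  if (!res.ok) throw new Error(`Columnar API returned HTTP ${res.status}`);
  return (await res.json()).results;
}

// The connector uses the same kind of round trip for its credential check
// and for schema inference over a Tabular Analytics View.
runStatement('SELECT 1 AS test;')
  .then((rows) => console.log(rows))
  .catch((err) => console.error(err));
```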

## Prerequisites

- A Couchbase Columnar deployment reachable from Looker Studio.
- A database user with permissions to read from the target Tabular Analytics Views (TAVs) and execute queries.
- Network access from Looker Studio to your Columnar host.

## Authentication

When adding the data source, provide:

- Path: The Columnar host (optionally with port). Examples:
- Capella-style host: `cb.<your-host>.cloud.couchbase.com`
  - Self-managed: `my.host:18095` (specify the port explicitly if it is not 443)
- Username and Password: Database credentials.

The connector validates credentials by running a lightweight test query (`SELECT 1 AS test;`).

## Create Tabular Analytics Views (TAVs) in Capella (Required)

Before connecting, create Tabular Analytics Views in Capella:

1. Open your Capella cluster, go to the Analytics tab, and launch the Analytics Workbench.
2. Prepare a SQL++ query that returns a flat, tabular result (flatten nested objects where needed). For example:

```sql
SELECT airportname AS airportname,
city AS city,
country AS country
FROM `travel-sample`.`inventory`.`airport`
WHERE country = 'United States';
```

3. Run the query, then click Save as View → Annotate for Tabular View. Define the schema (column names, data types, and primary keys) and save with a descriptive name.

- For details, see [Tabular Analytics Views](https://docs.couchbase.com/columnar/query/views-tavs.html) and [Buckets, Scopes, and Collections](https://docs.couchbase.com/cloud/clusters/data-service/about-buckets-scopes-collections.html).

## Configuration

Choose your mode in the configuration screen:

- Configuration Mode: `By View` (views-only connector).

### Mode: By View (TAV)

- Couchbase Database, Scope, View: Selected from dropdowns populated from metadata.
- Maximum Rows: Optional limit for returned rows; leave blank for no limit.

What runs:

- Data: `SELECT <requested fields or *> FROM \`database\`.\`scope\`.\`view\` [LIMIT n]`
- Schema: `SELECT array_infer_schema((SELECT VALUE t FROM \`database\`.\`scope\`.\`view\` [LIMIT n])) AS inferred_schema;`
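
As a rough sketch, the two statements can be assembled along the following lines; the database, scope, and view names are hypothetical, and the string templates simply mirror the patterns above rather than the connector's actual code.

```javascript
// Illustrative assembly of the view-mode statements (not the connector's code).
// The database/scope/view names are hypothetical.
const quote = (id) => '`' + id + '`';

function dataStatement(database, scope, view, fields, maxRows) {
  const projection = fields && fields.length ? fields.join(', ') : '*';
  const limit = maxRows ? ` LIMIT ${maxRows}` : '';
  return `SELECT ${projection} FROM ${quote(database)}.${quote(scope)}.${quote(view)}${limit}`;
}

function schemaStatement(database, scope, view, maxRows) {
  const limit = maxRows ? ` LIMIT ${maxRows}` : '';
  return `SELECT array_infer_schema((SELECT VALUE t FROM ${quote(database)}.${quote(scope)}.${quote(view)}${limit})) AS inferred_schema;`;
}

console.log(dataStatement('travel-analytics', 'inventory', 'us_airports', ['airportname', 'city'], 500));
console.log(schemaStatement('travel-analytics', 'inventory', 'us_airports', 500));
```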

> Note: This connector does not query collections directly and does not accept custom queries. It reads through Tabular Analytics Views (TAVs) only.

## Schema and Field Types

- The connector converts inferred types to Looker Studio field types:
- number → NUMBER (metric)
- boolean → BOOLEAN (dimension)
- string/objects/arrays/null → STRING/TEXT (dimension)
- Nested fields are flattened using dot and array index notation where possible (for example, `address.city`, `schedule[0].day`). Unstructured values may be stringified.
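
To picture the naming scheme, here is a small standalone helper that resolves such dot and index paths against a document; it is an illustration of the notation, not the connector's implementation.

```javascript
// Illustration of dot / array-index field paths (not the connector's code).
// Resolves a path such as "address.city" or "schedule[0].day" against a document.
function getByPath(doc, path) {
  const parts = path.split('.').flatMap((part) => {
    const match = part.match(/^([^[\]]+)(\[\d+\])*$/);
    if (!match) return [part];
    const indexes = [...part.matchAll(/\[(\d+)\]/g)].map((m) => Number(m[1]));
    return [match[1], ...indexes];
  });
  return parts.reduce((value, key) => (value == null ? undefined : value[key]), doc);
}

const doc = {
  address: { city: 'San Francisco' },
  schedule: [{ day: 0, flight: 'AF198' }],
};

console.log(getByPath(doc, 'address.city'));    // "San Francisco"
console.log(getByPath(doc, 'schedule[0].day')); // 0
```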

## Data Retrieval

- Only requested fields are projected. For nested fields, the connector fetches the required base fields and extracts values client-side.
- Row limits:
- View mode: `Maximum Rows` controls `LIMIT` (blank = no limit).

## Tips and Best Practices

- Prefer Tabular Analytics Views for BI tooling; they offer a stable, optimized interface.
- Keep datasets scoped and use `LIMIT` while exploring.

## Troubleshooting

- Authentication failure: Check host/port, credentials, and network reachability to Columnar.
- Schema inference errors: Ensure your entity or query returns rows; consider adding `LIMIT` for faster sampling.
- API error from Columnar: Review the response message surfaced in Looker Studio and verify entity names, permissions, and syntax.

## Future Scope (Prototype)

- Direct collection access and custom query support exist only as prototypes and are not available in this views-only connector. As support expands, you’ll be able to query collections directly from Looker Studio in addition to TAVs.

## Next Steps

- Build charts in Looker Studio using your TAV-backed fields.
- Iterate on Views/queries to shape the dataset for analytics.


112 changes: 112 additions & 0 deletions tutorial/markdown/connectors/looker-studio/dataapi/readme.md
@@ -0,0 +1,112 @@
---
# frontmatter
path: "/tutorial-looker-studio-dataapi"
# title and description do not need to be added to markdown, start with H2 (##)
title: Looker Studio with Couchbase Data API
short_title: Data API Connector
description:
- Connect Google Looker Studio to Couchbase through the Data API
- Configure auth, select collections or use custom SQL++ queries
- Learn schema inference, limits, and troubleshooting tips
content_type: tutorial
filter: connectors
technology:
- server
- query
tags:
- Looker Studio
- Google Data Studio
- Data API
- Connector
sdk_language:
- nodejs
length: 20 Mins
---

<!-- [abstract] -->

## Overview

Use this connector to build Looker Studio reports directly on Couchbase via the Data API. You can:

- Query by selecting a specific `bucket.scope.collection`.
- Or run a custom SQL++ query.

Behind the scenes, the connector authenticates with Basic Auth and talks to the Data API endpoints for caller identity checks and to the Query Service for SQL++ execution. Schema is inferred automatically from sampled data to make fields available in Looker Studio.

## Prerequisites

- A Couchbase Capella cluster or a self-managed cluster with the Query Service reachable from Looker Studio.
- A database user with permissions to read the target collections and run queries.
- Network access from Looker Studio to your cluster host.

## Authentication

When you add the data source in Looker Studio, you will be prompted for:

- Path: The cluster host (optionally with port). Examples:
- Capella: `cb.<your-host>.cloud.couchbase.com`
- Self-managed: `my.host:18095` (specify a non-443 port explicitly)
- Username and Password: Database credentials.

The connector validates credentials against the Data API (`/v1/callerIdentity`). If validation fails, verify host, port, credentials, and network access.
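
As a rough illustration, the check might look like the Node.js sketch below, assuming a plain GET over HTTPS with Basic Auth; the host and credentials are placeholders, not the connector's actual code.

```javascript
// Minimal sketch of the credential check against the Data API (illustrative only).
// Assumptions: a plain GET over HTTPS with Basic Auth; the host and credentials
// below are placeholders.
const host = 'cb.example.cloud.couchbase.com';
const auth = Buffer.from('username:password').toString('base64');

async function checkCredentials() {
  const res = await fetch(`https://${host}/v1/callerIdentity`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  if (!res.ok) throw new Error(`Credential check failed: HTTP ${res.status}`);
  return res.json(); // details about the authenticated user
}

checkCredentials()
  .then((identity) => console.log('Authenticated as:', identity))
  .catch((err) => console.error(err));
```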

## Configuration

After authentication, choose a configuration mode:

- Configuration Mode: `Query by Collection` or `Use Custom Query`.

### Mode: Query by Collection

- Couchbase Collection: Pick a `bucket > scope > collection` from the dropdown. The connector discovers collections for you.
- Maximum Rows: Optional limit for returned rows (default 100).

What runs:

- Data: `SELECT RAW collection FROM \`bucket\`.\`scope\`.\`collection\` LIMIT <maxRows>`
- Schema: `INFER \`bucket\`.\`scope\`.\`collection\` WITH {"sample_size": 100, "num_sample_values": 3, "similarity_metric": 0.6}`

### Mode: Use Custom Query

- Custom SQL++ Query: Paste any valid SQL++ statement. Include a `LIMIT` for performance.

What runs:

- Schema inference first attempts to run `INFER` on your query (a `LIMIT 100` is added if absent): `INFER (<yourQuery>) WITH {"sample_size": 10000, "num_sample_values": 2, "similarity_metric": 0.1}`
- If that fails, it runs your query with `LIMIT 1` and infers the schema from one sample document.
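
The fallback can be pictured with the sketch below. `runQuery` is a caller-supplied function standing in for a Query Service call, and the logic is a simplified illustration rather than the connector's code.

```javascript
// Simplified illustration of the schema-inference fallback for custom queries
// (not the connector's code). runQuery is a caller-supplied async function
// that sends a SQL++ statement to the Query Service and resolves with its rows.
function ensureLimit(query, limit) {
  // Append a LIMIT only when the query does not already contain one.
  return /\blimit\b/i.test(query) ? query : `${query.trim()} LIMIT ${limit}`;
}

async function inferCustomQuerySchema(userQuery, runQuery) {
  try {
    // First attempt: INFER over the (limited) query, as described above.
    const infer =
      `INFER (${ensureLimit(userQuery, 100)}) ` +
      `WITH {"sample_size": 10000, "num_sample_values": 2, "similarity_metric": 0.1}`;
    return await runQuery(infer);
  } catch (err) {
    // Fallback: run the query with a small row limit and derive field names
    // from the first sample document; type mapping is covered in
    // "Schema and Field Types" below.
    const [sample] = await runQuery(ensureLimit(userQuery, 1));
    return Object.keys(sample ?? {});
  }
}
```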

## Schema and Field Types

- Fields are inferred from sampled data. Types map to Looker Studio as:
- NUMBER → metric
- BOOLEAN → dimension
- STRING (default for text, objects, arrays) → dimension
- Nested fields use dot notation (for example, `address.city`). Arrays and objects that are not expanded are returned as stringified values.
- If the collection has no documents or your query returns no rows, schema inference will fail.
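
A simplified sketch of that mapping is shown below; the field descriptor shape is illustrative and not the connector's actual schema objects.

```javascript
// Simplified illustration of the type mapping (not the connector's schema code).
// Maps a sampled value to a rough field descriptor for Looker Studio.
function toFieldDescriptor(name, sampleValue) {
  if (typeof sampleValue === 'number') {
    return { name, dataType: 'NUMBER', conceptType: 'METRIC' };
  }
  if (typeof sampleValue === 'boolean') {
    return { name, dataType: 'BOOLEAN', conceptType: 'DIMENSION' };
  }
  // Text, objects, arrays, and nulls all fall back to STRING dimensions;
  // objects and arrays that are not expanded end up stringified.
  return { name, dataType: 'STRING', conceptType: 'DIMENSION' };
}

const sampleDoc = { airportname: 'SFO', elevation: 4, international: true, address: { city: 'San Francisco' } };
console.log(Object.entries(sampleDoc).map(([field, value]) => toFieldDescriptor(field, value)));
```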

## Data Retrieval

- Only the fields requested by Looker Studio are returned. Nested values are extracted using dot paths where possible.
- Row limits:
- Collection mode: `Maximum Rows` controls the `LIMIT` (default 100).
- Custom query mode: You control `LIMIT` inside your query.

## Tips and Best Practices

- Prefer `Query by Collection` for quick starts and simpler schemas.
- Always add a `LIMIT` when exploring with custom queries.
- Ensure your user has at least query and read access on the target collections.

## Troubleshooting

- Authentication error: Check host/port, credentials, and that the Data API is reachable from Looker Studio.
- Empty schema or no fields: Ensure the collection has data; for custom queries, verify the statement and add `LIMIT` to improve sampling.
- Query errors from the service: Review the error text surfaced in Looker Studio; fix syntax, permissions, or keyspace names.

## Next Steps

- Create charts and tables in Looker Studio from the exposed fields.
- Iterate on custom SQL++ queries to shape the dataset for your dashboards.