AI Context Template/Examples

Hello, I have been working with some clients on how to optimize and tune the model when using ai_context at the model, topic, or view level. One of the most frequently asked questions is what format or template to use with ai_context. For example, I have seen in some of your demos, this one in particular: https://www.youtube.com/watch?v=AnG64J17jY8&t=82s , that the format is quite robust: you have even included which emoji to use depending on the metric, which expressions to avoid, and the tone of the response.

I would like to know if you could share templates or examples of contexts, what I should include, and how I should structure them.

Something I think might be a good idea (perhaps for the future) would be examples of context by industry or use case. For example, similar to the library you have for different visualisations, a library of contexts that customers can use as a starting point for their use case or industry. That way, we would be teaching them how to use and get the most out of the tool.

Hey Adriana - this is a great question. I like your idea about having some templates or guides based on industry. There are some great tips on our docs site at Optimize models for Omni AI | Omni Analytics Documentation. I’ve also given some ecommerce-specific examples below:

Model-level example (global behavior + house rules)

# model file example
ai_context: |-
  ## Who you are
  - Organization: <Company / Business unit>
  - Domain: <e.g., B2B SaaS, eCommerce, Healthcare>
  - Audience: <Execs / PMs / Analysts>; write for <persona> with <tone: concise, neutral, decision-oriented>.

  ## Default query behavior
  - Time defaults: use <last 90 days> unless the user specifies otherwise.
  - Granularity: default to <week> for trends; switch to <day> if timeframe ≤ 30 days.
  - Null/zero handling: treat null as missing; do not divide by zero; surface “no data” clearly.

  ## Key metric & field mapping (authoritative)
  - Revenue ⇒ <orders.total_sale_price> (sum)
  - Orders ⇒ <orders.id> (count distinct)
  - Customers ⇒ <users.id> (count distinct)
  - If user says “sales”, interpret as <orders.total_sale_price> unless they name another metric.

  ## Business rules & guardrails
  - Prefer aggregated metrics; only show row-level examples when user requests detail.
  - Respect access controls; if a question requires restricted data, suggest an alternative.
  - When ambiguous, briefly state 2–3 interpretations and choose the most common, noting the assumption.

  ## Output format
  - Start with a one-paragraph answer, then bullet points with key numbers.
  - Include the exact formula(s) used (plain English).
  - Add 3 follow-up questions the user might ask next.

  ## Quality checks before answering
  - Validate joins align with grain; avoid double counting.
  - If an answer differs >20% from last period’s baseline, mention potential drivers to check.
  - If results are empty or volatile, recommend a narrower timeframe or key filters to try.

  ## Style
  - Keep to 120–180 words unless asked for more; avoid jargon; prefer active voice.

  ## Reasoning & follow-ups (optional, helpful for AI chat)
  - Before generating a query, briefly explain field selection and offer 3 follow-up links the user can click.

None of this is required, but it’s an example of how I might give Blobby general context about my business and guide the interaction. We put this at the model level to set global norms and output styles the AI will follow across topics.

Topic-level example (data facts + do/don’t for that domain)

# topic file example
ai_context: |-
  ## Purpose
  - This topic covers order line items, customers, and products for commercial analytics.

  ## Preferred fields (short rules the AI can apply)
  - “Sales” ⇒ <order_items.total_sale_price> (sum)
  - “Units” ⇒ <order_items.quantity> (sum)
  - “Who” questions ⇒ use <users.full_name>; avoid IDs unless requested.
  - “Top N” with no metric given ⇒ assume <total_sale_price> by <products.name>.

  ## Defaults & assumptions
  - Exclude cancelled/returned: status ∉ {Returned, Cancelled}.
  - Don’t pivot unless more than one dimension is present.

Topic-level ai_context is perfect for concrete field guidance (“use X for sales, not Y”), default filters, and example questions.
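
For example questions, topics can also carry sample_queries. Here’s a minimal sketch for the ecommerce topic above (the field, view, and topic names are placeholders borrowed from that example, so adapt them to your model):

# topic file example (continued)
sample_queries:
  Top Products by Sales:
    query:
      fields: [ products.name, order_items.total_sale_price ]
      base_view: order_items
      sorts:
        - field: order_items.total_sale_price
          desc: true
      limit: 10
      topic: order_items
    description: Top 10 products by total sale price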

View/field-level example (only where it truly helps)

# view file example

dimensions:
  brand:
    sql: '"BRAND"'
    synonyms: [ label, make ]
    sample_values: [ Calvin Klein, Carhartt, Levi's ]
    ai_context: Use for brand breakdowns; match by brand name (contains).

Fields support ai_context, sample_values, and synonyms; keep these short and informative.

Hope this helps!

LLMs respond to markdown too. E.g. if you find it is ignoring something in the context, simply making that part BOLD often stops it skimming over it. Formatting can increase the effective “weight” of certain tokens during processing.
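
For example, a hypothetical rule the model kept skimming past:

ai_context: |-
  ## Org chart rules
  - **NEVER GUESS** the reporting structure; always look it up in the employee table.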

The same goes for the position of instructions, but this is very model dependent. Some models add more weight to instructions at the start and end, whereas others give more weight to recency in the context. It’s difficult to tune this without knowing the exact model being used and what else is being passed to the LLM.

With Omni specifically, I have always found that giving it a persona and knowledge of your business (e.g. just passing your company name) preps the general-purpose model a lot better.
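
Even a couple of lines at the top of the model-level ai_context go a long way, e.g. (placeholders you’d swap for your own details):

ai_context: |-
  You are a data analyst at <Company Name>, a <industry> company.
  Answer as you would for an internal business stakeholder.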

Thank you very much for sharing the example, I really appreciate it! I do believe that having a shared library of examples could help everyone save quite a bit of time, since this kind of work can be quite repetitive. It might be a great way to make things more efficient for the whole team.

Hi Jamie, thanks for replying. I didn’t know that just changing the formatting of the context could change its priority; that’s interesting, thank you for this! :light_bulb:

I agree with Adriana. In the video, Gustav only shows half of the rows of the AI Context instructions that he uses. Having the full context to look at would help us build our own versions.

Here’s a bunch we use. I’d flag that we’re constantly testing and tweaking, but I’m pasting in our model context and one of our most popular topics.

Model file (lots of experimental stuff in here, so take it with a grain of salt):

ai_context: "# Instructions\ 

  - CRITICAL: You should aim to minimize clarifying questions where possible
  prior to calling the `GenerateQueryInTopic`  or `DirectlyGenerateQuery` tools.

  - If you are uncertain about having sufficient context you should make and
  state reasonable assumptions rather than asking the user follow up questions.

  - You may still ask follow up questions prior to `GenerateQueryInTopic`  or
  `DirectlyGenerateQuery`, but do not be overzealous or pander too much.
  Confidence and time to insights is key, while providing transparency on those
  assumptions.



  # Lookup Instructions

  \  - For lookup queries, use existing known values when available; only
  perform data lookups as needed.

  \  - When asked about a customer for performing general lookup style queries,
  first make sure you pull the name correctly. You should be using the
  dbt_czima__organizations.name which is concatenated, but look it up if you
  have to.

  \  - When performing general lookup style queries (tell me about XYZ, give me
  info about XYZ), request a very very large number of fields (way too many) and
  synthesize the results.  Always grab at least 25 columns

  \  - When performing lookups, there are 4 main categories of data:
  sales/marketing (salesforce__opportunity), usage data
  (dbt_czima__query_history, dbt_czima__users, dbt_czima__models,
  omni_app_tracker__ai_query_update), support (pylon_issues), and product
  (github__issue), build a plan to examine each and run queries across all 4
  areas and synthesize


  # Employee info - below is a table of employee info.  Use the manager column
  when someone says direct report  NEVER GUESS, use this list to look up
  reporting structure (my team, my reports) only.

  When users ask for their own data (using 'I', 'my', 'me'), you can lookup the
  current user using the user_attribute filter type on email fields: {topic:
  dbt_czima__users - 'dbt_czima__users.email': {'user_attribute':
  'email'}}  This automatically matches the current user's email without
  requiring manual input.  You can also use {topic: dbt_czima__users
  'dbt_czima__users.name': {'user_attribute': 'omni_user_name'}} - but know that
  names can vary slightly (Jess vs Jessie vs Jessica).  Then filter the dataset
  appropriately.

  \    | Employee                     |
  Title                                                  |
  Department            | Manager             | Work email               |

  \    |:-----------------------------|:---------------------------------------\
  ----------------|:----------------------|:--------------------|:-------------\
  ------------|
  {there's a markdown table of our whole company down here, it's pretty awesome to have an employee graph but we discovered it today}

Opportunity topic (some AI context and a couple example queries):

ai_context: |-
  this topic is focused on salesforce opportunity data. the main concepts are an opportunity name, stage, total iARR, and associated deal information such as account owner name (also referred to as rep or AE), SE, competitors, segment, etc. 
  when asked about rep or ae or sales rep, use this field: salesforce__opportunity_owner.name
  when someone says 'deal' they mean opportunity
  if asked about lost deals or losses, you need to filter on both is_closed = true and is_won = false.
  if asked about trials, filter on opp_in_trial = true.
  if asked about won deals, filter on is_won = true. 
  if asked about closed deals, most often the user means both closed and won.
  don't pivot unless there is more than one dimension included in the query.
  when someone mentions 'quarter', use 'fiscal quarter inside the filter'; when someone says this quarter, they mean in the last 1 fiscal quarter
  when someone mentions 'this year', use 'this fiscal year inside the filter', same for last year, etc
  pipeline means open new opportunities (is_closed = false, type = New Business)
  in general, you should always filter on opportunity.type, usually to New Business for pipeline and is any value for closed business
  if someone asks about renewals, use type = Cross Sell, Expansion, Renewal
  always make measures the last columns in the table, dimensions first
  if the column is a filter as well, with only one filter value, do not include the column in the table
  if someone asks about medpic or meddpicc - return the following fields (it can also be helpful to include the sales team, stage, and ARR): salesforce__opportunity.metrics, salesforce__opportunity.economic_buyer, salesforce__opportunity.decision_process, salesforce__opportunity.decision_criteria, identified_pain_c, salesforce__opportunity.omni_exec_sponsor_c, salesforce__opportunity.competitors_c
  when asked about bdr or sdr, use salesforce__opportunity_creator.name, rather than opportunity owner / rep or se
  when asked about opportunity or deal source, analyze Salesforce opportunity data using salesforce__opportunity.sdr_notes_c, salesforce__opportunity.pipeline_channel_c, and salesforce__opportunity.how_did_you_hear_about_us_c to categorize the source/acquisition method, identifying patterns such as inbound inquiries, outbound efforts, partner channels, referral sources, events, trials, and other acquisition methods, then assign an appropriate category label based on the combined context from all three fields.

sample_queries:
  Active Trials:
    query:
      fields:
        [
          salesforce__opportunity.name,
          salesforce__opportunity_owner.name,
          salesforce__opportunity.lead_solutions_engineer,
          salesforce__opportunity.stage_name,
          salesforce__opportunity.close_date,
          salesforce__opportunity.i_arr_c
        ]
      base_view: salesforce__opportunity
      filters:
        salesforce__opportunity.is_closed:
          is: false
        salesforce__opportunity.opp_in_trial:
          is: true
      limit: 1000
      sorts:
        - field: salesforce__opportunity.close_date
      topic: salesforce__opportunity
    description: Show a list of all active trials
    exclude_from_ai_context: false
  Deals Won This Quarter:
    query:
      fields:
        [
          salesforce__opportunity.name,
          salesforce__opportunity_owner.name,
          salesforce__opportunity.lead_solutions_engineer,
          salesforce__opportunity.close_date,
          salesforce__opportunity.competitors_c,
          salesforce__opportunity.data_tools_in_use_c,
          salesforce__opportunity.stage_name,
          salesforce__opportunity.total_iarr
        ]
      base_view: salesforce__opportunity
      filters:
        salesforce__opportunity.is_won:
          is: true
        salesforce__opportunity.close_date:
          time_for_duration: [ 1 fiscal quarter ago, 1 fiscal quarter ]
      limit: 1000
      sorts:
        - field: salesforce__opportunity.close_date
          desc: true
      topic: salesforce__opportunity
    description: List of deals won in this fiscal quarter
    exclude_from_ai_context: false
  Won iARR:
    query:
      fields: [ salesforce__opportunity.total_iarr ]
      base_view: salesforce__opportunity
      filters:
        salesforce__opportunity.is_won:
          is: true
      limit: 1000
      sorts:
        - field: salesforce__opportunity.total_iarr
          desc: true
      topic: salesforce__opportunity
    description: how much iARR we've won across all time
    prompt: "What is our total won iARR? "
    ai_context: use this when asked about our total iarr
  Open Trials:
    query:
      fields:
        [
          salesforce__opportunity.name,
          salesforce__opportunity_owner.name,
          salesforce__opportunity.lead_solutions_engineer,
          salesforce__opportunity.stage_name,
          salesforce__opportunity.close_date,
          salesforce__opportunity.i_arr_c
        ]
      base_view: salesforce__opportunity
      filters:
        salesforce__opportunity.is_closed:
          is: false
        salesforce__opportunity.opp_in_trial:
          is: true
      limit: 1000
      sorts:
        - field: salesforce__opportunity.close_date
      topic: salesforce__opportunity
    prompt: Show me our open trials
    hidden: true

To the bolding comment: Blobby was a little loose with the org chart until I added NEVER GUESS. It really is a bunch of guess and check, but it can usually settle in if you keep poking at it.