GenAI
Generate content using the Generative AI Language Model of the Riverbed Platform.
Enabling the GenAI node
Enable Riverbed IQ Ops Assist, an artificial intelligence functionality based on a large language model, from the Riverbed IQ Ops Assist Configuration page.
For additional information about Riverbed IQ Ops Assist, see the AI in Products documentation in the Riverbed Trust Center.
AI-Generation disclaimer
When you use Generative AI technologies to generate content, the target audience must be made aware that the content is AI generated. We recommend adding a disclaimer to the output of content generated with the GenAI node, for example by appending "AI-Generated content".
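As an illustration of the recommendation above, a small post-processing step can append the disclaimer to whatever the model returns. This is a hypothetical sketch (the function name and formatting are not part of the product):

```python
def add_ai_disclaimer(generated_text: str,
                      label: str = "AI-Generated content") -> str:
    """Append a disclaimer line so readers know the text is AI generated."""
    return f"{generated_text}\n\n---\n{label}"

# Example: tag a model-generated summary before it reaches the audience.
summary = "Round trip time spiked at 14:02 and recovered by 14:10."
print(add_ai_disclaimer(summary))
```

The same idea can be applied in a downstream runbook node or in the consuming system; the key point is that the label travels with the generated content.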
Properties
Node Label: Type an informative name for the GenAI node, or keep the system-provided default of "GenAI".
Debug: Select Debug to receive debug data when the node executes.
Model Instructions and Context
Write the instructions to the Generative AI model or use a variable to pass the instructions in. The instructions are a "system prompt" where you can describe:
- How the Generative AI model should behave.
- What it should and should not answer.
- How it should format responses.
The "Help me generating instructions" button can assist you writing the instructions. For example: "Analyze a time series of round trip time".
Query
Select an item to specify what is used as the query to feed the Generative AI model. You can use the content of a variable, the trigger, the output of a parent node, or you can provide the query text.
Inference config
Expand Inference Config, above Output, to set optional inference parameters that are sent to the Generative AI service with each request. Every field is optional; leave a field unset to rely on the service default for that parameter.
- maxTokens: Upper limit on how many tokens the model may generate in the response.
- temperature: Influences how varied or deterministic the generated text is.
- topP: Nucleus sampling cutoff (the model considers tokens whose cumulative probability is within this top-p mass).
- topK: Restricts sampling to the K highest-probability tokens at each step.
- stopSequences: One or more sequences that end generation when the model produces them.
- reasoningConfig: Optional reasoning settings for models that support configurable reasoning behavior.
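To make topK and topP concrete, the sketch below shows how the two filters narrow the candidate tokens at a single decoding step. It is a simplified conceptual model, not the Generative AI service's actual implementation:

```python
def filter_candidates(probs: dict, top_k: int, top_p: float) -> list:
    """Apply topK then topP filtering to one step's token probabilities."""
    # topK: keep only the K highest-probability tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # topP: keep tokens until their cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = {"slow": 0.5, "fast": 0.3, "flat": 0.15, "noisy": 0.05}
print(filter_candidates(probs, top_k=3, top_p=0.7))  # → ['slow', 'fast']
```

Lower topP or topK values restrict the model to its most likely continuations, which tends to make output more predictable; higher values admit more variety.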
When Debug is enabled for this node, runbook debug information includes the inference configuration that was applied for the run, the request details, the model response, and token usage returned by the service. For a structured view in the debug INSPECT tab, see Debug Inspect Tab by Node Type.
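Taken together, the optional parameters can be pictured as a single configuration object. The sketch below reuses the parameter names from this page, but the exact structure sent to the service (and echoed in debug output) is an assumption:

```python
# Hypothetical inference configuration; omit a key to rely on the
# service default for that parameter, as described above.
inference_config = {
    "maxTokens": 512,          # cap on generated tokens
    "temperature": 0.2,        # lower = more deterministic output
    "topP": 0.9,               # nucleus sampling cutoff
    "topK": 40,                # sample from the 40 likeliest tokens
    "stopSequences": ["###"],  # stop generating at this sequence
}
```

Enabling Debug on the node lets you confirm which of these values were actually applied for a given run.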
Runbook Compatibility
Incident, Lifecycle, On-Demand, External (Webhook), and Subflow
See the Riverbed Community Toolkit repository on GitHub for examples of runbooks using the GenAI node, for example the External Runbook Demo - IQ Assist - Create ITSM ticket with Endpoint Diagnostic, which works with Aternity Intelligent Service Desk (ISD) alerts and ServiceNow.