Riverbed IQ Ops Assist Configuration
This page explains how to request that IQ Ops Assist be enabled for your Riverbed IQ Ops instance.
To navigate to the Riverbed IQ Ops Assist Configuration page:
- Click the Waffle Menu.
- Click IQ Ops > Management.
- On the Management page, click the Hamburger Icon, then click Riverbed IQ Ops Assist Configuration.
In Riverbed IQ Ops Assist Configuration, click Submit Request. This sends a request to Riverbed to enable the feature. After the feature is enabled, the page displays the connector information described below.
When the feature is enabled, the GenAI node is available for you to use in runbooks. See the GenAI node documentation for more information on how to use this node.
Connector information when IQ Assist is enabled
When IQ Ops Assist is enabled, the configuration page shows two tabs:
- Riverbed IQ Assist for Copilot: values used to configure the Riverbed IQ Assist connector in Microsoft Copilot or Power Platform.
- Riverbed IQ Assist for ServiceNow: values used to configure the Riverbed IQ Assist app in your ServiceNow instance (for example, under All > Riverbed IQ Assist > Configuration).
Sensitive values are partially masked in the UI. Use the Copy icon next to each value to copy the full value to the clipboard.
For the full procedure to obtain these parameters, configure the connector in Microsoft Copilot or Power Platform, and build the ServiceNow Skills webhook URL, see Configuring the Riverbed IQ Assist connector for Microsoft Copilot and ServiceNow.
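The exact webhook URL format and authentication scheme come from the procedure linked above. Purely as an illustration, the following hypothetical sketch shows how a webhook URL and token copied from the configuration page might be exercised from a script to verify connectivity. The URL shape, header name, and payload fields here are assumptions for the sake of the example, not the documented API.

```python
import json
import urllib.request

# Hypothetical values: substitute the real webhook URL and token copied
# from the Riverbed IQ Ops Assist Configuration page. The URL shape,
# header name, and payload fields below are illustrative assumptions.
WEBHOOK_URL = "https://example.invalid/servicenow-skills/webhook"
API_TOKEN = "<token copied from the configuration page>"

def send_test_request(question: str) -> dict:
    """POST a sample payload to the (assumed) Skills webhook and return the JSON reply."""
    payload = json.dumps({"query": question}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",  # assumed bearer-token auth
        },
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.load(response)

if __name__ == "__main__":
    print(send_test_request("Summarize open incidents"))
```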
IQ Ops Assist
IQ Ops Assist is an optional feature. The available models and configuration options will be provided to you at the time of enablement. When the feature is enabled, Riverbed uses large language model (LLM) inference services provided by its subprocessors.
AWS (Amazon Bedrock): Customer Data remains stored exclusively in AWS data centers in the customer-selected data center region. During inference, requests may be routed to other AWS regions within the same geography (for example, NA, EU, or APAC) to optimize performance and availability. All data transmitted between regions remains on AWS's private network backbone, is encrypted in transit, and does not traverse the public internet. No Customer Data is stored outside the customer's selected region.
The table below maps each customer-selected AWS data center region to the corresponding AWS inference processing locations.
| Customer-Selected AWS Data Center Region | IQ Ops Assist Model Processing Locations |
|---|---|
| United States | United States |
| Canada | Canada |
| Australia | Australia, Japan, South Korea, India, Singapore |
| United Kingdom | United Kingdom, Germany, Sweden, Ireland, France |
| Germany | Germany, Sweden, Ireland, France |
Microsoft Azure (Azure AI Foundry): Inference requests are processed entirely within the customer-selected region in Microsoft Azure data centers. Customer Data remains stored and processed within that region.
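The geography-scoped routing described above for Amazon Bedrock corresponds to Bedrock's cross-region inference profiles. As a minimal sketch of how that mechanism looks from the AWS SDK side (illustrative of the Bedrock feature only, not Riverbed's internal implementation; the specific model profile ID is an example), a geography-prefixed profile lets Bedrock route a request to other regions within that geography:

```python
import boto3

# Illustrative only: Amazon Bedrock cross-region inference profiles, the AWS
# mechanism behind the geography-scoped routing described above. This is not
# Riverbed's internal code; the profile ID below is an example.
client = boto3.client("bedrock-runtime", region_name="eu-west-1")

# The "eu." prefix selects an EU cross-region inference profile: Bedrock may
# route the request to other EU regions, but traffic stays on AWS's private
# backbone and data is not stored outside the selected region.
response = client.converse(
    modelId="eu.anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```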
AI-Generation disclaimer
When you use generative AI technologies to generate content, the target audience must be made aware that the content is AI generated. We recommend adding a disclaimer to content generated with the GenAI node, for example by appending "AI-Generated content" to the output.
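As a trivial sketch of such a post-processing step (the function name and wrapper are ours, not part of the product; apply it in whatever system consumes the GenAI node's output):

```python
AI_DISCLAIMER = "AI-Generated content"

def with_disclaimer(generated_text: str) -> str:
    """Append a disclaimer line so readers know the text is AI generated."""
    return f"{generated_text}\n\n{AI_DISCLAIMER}"

# Example usage with a hypothetical GenAI node result:
print(with_disclaimer("Summary: CPU utilization spiked at 14:02 UTC."))
```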