SplashBI AI Policy
1. Purpose
This policy defines how SplashBI governs the use, development, and deployment of AI capabilities (SplashAI) used to support customer services, including natural-language analytics (e.g., text-to-SQL, conversational analytics) and related AI-assisted workflows. It establishes requirements for security, privacy, transparency, quality, monitoring, and third-party risk management.
2. Scope
This policy applies to:
- AI features embedded in SplashBI products and services (e.g., conversational analytics, query assistance, follow-up Q&A).
- AI used by SplashBI personnel in delivery/support activities for customer engagements.
- AI-related assets: models, SDKs, packages, plugins/agents/tools, embeddings, vector stores, and orchestration services.
3. Definitions
- Foundation Model: Large-scale third-party model used for generative tasks.
- Inference: Runtime use of a model to generate outputs without modifying model weights.
- Training/Fine-Tuning: Any process that updates model parameters/weights.
- Customer Data: Data provided by or derived from a customer environment, including aggregated outputs.
4. Governance and Accountability
SplashBI maintains an internal AI governance process spanning product, engineering, security, and compliance. AI capabilities are reviewed for:
- Intended use and risk level
- Data handling and exposure paths
- Security controls and vendor controls
- Testing/validation and monitoring readiness
High-risk changes (new model/provider, new data classes, new autonomous behavior) require explicit security/compliance review prior to production release.
5. Approved AI Usage Patterns
SplashBI’s AI capabilities are designed primarily for natural-language analytics:
- A user asks a question in plain language.
- The system combines governed metadata/semantic context (e.g., schema/entity relationships) with the user's prompt to generate SQL.
- SQL executes against authorized reporting data sources.
- Results return to the application layer and are rendered as dashboards/visualizations.
- For follow-up questions, the system may provide aggregated/minimized prior results (e.g., JSON outputs) to preserve conversational continuity.
AI is intended as decision support for analytics workflows, not an autonomous decision-maker for regulated or consequential decisions.
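The flow above can be sketched as follows. This is an illustrative outline only, not SplashBI's implementation; the function names (`build_prompt`, `answer_question`) and the `llm_client`/`run_query` callables are hypothetical placeholders.

```python
# Illustrative sketch of the natural-language analytics flow described above.
# All names here are hypothetical placeholders, not SplashBI APIs.

def build_prompt(question: str, semantic_context: dict) -> str:
    """Combine the user's question with governed metadata only --
    no raw customer records are included in the prompt."""
    schema_lines = "\n".join(
        f"- {table}: {', '.join(cols)}"
        for table, cols in semantic_context["tables"].items()
    )
    return (
        "You are a SQL assistant. Use only the tables below.\n"
        f"Schema:\n{schema_lines}\n"
        f"Question: {question}\nSQL:"
    )

def answer_question(question, semantic_context, llm_client, run_query):
    prompt = build_prompt(question, semantic_context)
    sql = llm_client(prompt)           # inference only; no weight updates
    rows = run_query(sql)              # executes against authorized sources
    return {"sql": sql, "rows": rows}  # aggregated JSON may seed follow-ups
```

Note that only metadata and the question reach the model; record-level data stays behind the query layer, consistent with Section 7.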
6. Hosting and Deployment Standard
- SplashBI primarily operates AI services in Oracle Cloud Infrastructure (OCI).
- AI components are hosted in a private, controlled environment (not public/consumer AI instances).
- OCI HeatWave may be used for vector storage and embedding workflows.
- For on-prem customers, SplashBI provisions a dedicated OCI environment for AI/model services and connects securely to customer-authorized on-prem data sources.
- SplashBI's AI hosting standard is single-cloud (OCI); multi-cloud deployments are not supported for AI hosting.
7. Data Handling and Minimization
7.1 Allowed Inputs to AI Components (Inference)
AI components may process only the minimum required information, such as:
- User prompts/questions
- Approved metadata/semantic context (e.g., schema, entity relationships, governance metadata)
- Aggregated/minimized outputs (e.g., JSON results) when required for follow-up analytics
7.2 Prohibited / Restricted Inputs
Unless explicitly required and authorized by customer policy and the engagement scope, SplashBI will not send:
- Sensitive personal data (e.g., SSNs, financial account numbers, passwords)
- Raw record-level exports or unnecessary identifiers
- Credentials, secrets, or security tokens
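A minimization gate of this kind can be sketched as a pre-send screen on AI inputs. The patterns below are illustrative examples only, not a complete or actual detection ruleset.

```python
import re

# Illustrative pre-send screen for AI inputs; patterns are examples,
# not an exhaustive ruleset.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b\d{13,16}\b"),
    "secret": re.compile(r"(?i)\b(password|api[_-]?key|token)\s*[:=]"),
}

def screen_input(text: str) -> list[str]:
    """Return the names of any blocked patterns found; an empty
    list means the input passed the screen."""
    return [name for name, pat in BLOCKED_PATTERNS.items()
            if pat.search(text)]
```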
7.3 Training and Model Improvement
- SplashBI does not use customer data to train or fine-tune foundation models.
- Any customer information processed by AI is used for inference only, consistent with contractual obligations and customer controls.
8. Public vs. Private AI Tools
- Customer data must not be entered into public/consumer AI services by personnel.
- Where AI services are used, SplashBI uses enterprise/private deployments under contractual and administrative controls.
9. Access Control and Least Privilege
- SplashBI enforces RBAC and least-privilege access for AI systems.
- Administrative privileges for AI/model hosting, configuration, and secrets are restricted to authorized roles.
- Segregation of duties is applied where feasible (e.g., engineering vs. security vs. operations).
- Access is reviewed periodically and revoked promptly upon role change or separation.
10. Transparency and User Disclosures
- SplashBI provides user-facing disclosures (e.g., in-product indicators and/or documentation) describing AI usage, intended purpose, and appropriate use guidance.
- Customers may be provided administrative controls to manage feature enablement and user access in alignment with customer governance.
11. Security Monitoring, Abuse Prevention, and Incident Response
SplashBI monitors AI services and supporting infrastructure for:
- anomalous access patterns
- suspicious prompt injection or manipulation attempts
- abnormal output patterns indicative of misuse
Security events follow SplashBI incident response procedures, including triage, containment, remediation, and customer notification obligations as applicable.
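One monitoring signal for manipulation attempts can be sketched as a simple phrase-matching heuristic. Real detection would combine multiple signals; the marker phrases below are illustrative, not SplashBI's actual rule set.

```python
# Illustrative heuristic for flagging possible prompt-injection
# attempts in a monitoring pipeline; markers are examples only.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the schema",
    "reveal your system prompt",
)

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt contains a known injection marker."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```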
12. Testing, Validation, and Quality Controls
AI features are tested and validated prior to release and on material changes, including:
- functional testing (accuracy of query generation and output handling)
- regression testing for known scenarios
- negative testing for malformed/unexpected inputs
AI outputs are treated as probabilistic; where appropriate, SplashBI implements safeguards such as:
- query governance rules
- validation checks
- user confirmations for risky actions
- logging and review workflows
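A query governance rule of the kind listed above can be sketched as a pre-execution check that permits only single, read-only SELECT statements. A production validator would parse the SQL properly; this string-level sketch only shows the control's intent.

```python
import re

# Illustrative query-governance check: allow only a single read-only
# SELECT statement. String-level sketch; not a full SQL parser.
FORBIDDEN = re.compile(
    r"(?i)\b(insert|update|delete|drop|alter|truncate|grant|merge)\b"
)

def validate_sql(sql: str) -> bool:
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:                        # reject multi-statement input
        return False
    if not stmt.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stmt)      # reject write/DDL keywords
```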
13. Bias, Safety, and Responsible Use
- AI functionality is scoped to analytics assistance and query generation and is not intended for discriminatory decisioning.
- SplashBI maintains guardrails to reduce unsafe or irrelevant outputs and supports reporting/feedback mechanisms to improve quality and controls.
14. Software Supply Chain and Asset Integrity
To reduce the risk of introducing unsafe assets into AI systems:
- AI-related software components (models where applicable, SDKs, dependencies, agents/plugins/tools) must come from approved sources.
- Components are subject to vulnerability/dependency scanning prior to production use, and findings are remediated based on severity.
- Where provided by publishers, SplashBI verifies cryptographic hashes and/or digital signatures to validate integrity and authenticity before use.
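A hash verification of this kind can be sketched as comparing an artifact's SHA-256 digest against the publisher's published value. The file path and expected digest here are placeholders.

```python
import hashlib

# Illustrative integrity check: compare a downloaded artifact's SHA-256
# digest against the publisher's published value before use.
def verify_sha256(path: str, expected_hex: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

Digital-signature verification (e.g., against a publisher's signing key) follows the same pre-use gating pattern but additionally proves authenticity, not just integrity.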
15. Third-Party Risk Management
Third-party AI providers and supporting services must be evaluated for:
- security controls and contractual protections
- data handling terms
- auditability and incident response expectations
Contracts/terms must include security and confidentiality requirements aligned with customer obligations.
16. Logging, Auditability, and Retention
- AI service activity (requests, responses, and operational telemetry) may be logged for security and audit purposes.
- Logging is governed by least privilege, and retention is defined by internal policy and customer contractual requirements.
- Where feasible, sensitive data is minimized or excluded from logs.
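Minimizing sensitive data in logs can be sketched with a redacting log filter that masks likely identifiers before records are written. The single SSN pattern here is an example, not a full ruleset.

```python
import logging
import re

# Illustrative logging filter that masks likely identifiers before a
# record is emitted; the SSN pattern is an example only.
class RedactingFilter(logging.Filter):
    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.SSN.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, now redacted
```

Attached via `logger.addFilter(RedactingFilter())`, the filter runs before handlers format the record, so the sensitive value never reaches log storage.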
17. Compliance
SplashBI’s AI controls are designed to support applicable privacy, security, and regulatory obligations through:
- tenant isolation
- RBAC / least privilege
- data minimization
- auditability and monitoring
- vendor governance
18. Policy Exceptions
Exceptions must be documented, risk-assessed, and approved by Security & Compliance leadership prior to implementation.