1. Document Control
| Field | Value |
|---|---|
| Title | Kiey AI Impact Assessment / Algorithmic Impact Assessment |
| Version | 2026-05-04 |
| Effective Date | May 4, 2026 |
| Document Owner | Kiey Holdings, Ltd. (Chief Executive Officer) |
| Document Custodian | Engineering Lead, Kiey Holdings, Ltd. |
| Classification | Internal — produced on request to regulators, customers, and auditors |
| Review Cadence | Annual, and on every material change to the AI system (model swap, new feature category, expansion to a new regulated jurisdiction, or new data source) |
| Cross-References | Kiey Terms of Service §§13.1–13.14, §14; Privacy Policy §12; Data Processing Addendum (https://kiey.com/dpa); Subprocessor List (https://kiey.com/subprocessors); Acceptable Use Policy §15 |
2. Executive Summary
Kiey operates a conversational diagnostic and vendor-matching assistant ("Kiey AI") used by Real Estate Team owners, agents, employees, and homeowner clients to triage residential home-services issues (HVAC, plumbing, electrical, appliance repair, landscaping, and analogous trades). Kiey AI takes natural-language descriptions of home-service problems, suggests safe quick fixes, and — when the user asks — surfaces a list of pre-vetted vendors from the user's Real Estate Team for the user to message. This assessment is prepared to satisfy emerging algorithmic-impact-assessment expectations under the Colorado AI Act (effective February 1, 2026), the EU AI Act, Quebec Law 25 automated-decision-making provisions, California's CPRA Automated Decisionmaking Technology (ADMT) regulations, and analogous US state laws. Headline conclusion: Kiey AI does not make or substantially inform any "consequential decision" about housing access, lending, insurance, employment, education, healthcare, or public benefits, and is not used for biometric identification, social scoring, predictive policing, or any other practice that is high-risk under EU AI Act Annex III or prohibited under EU AI Act Article 5. Kiey AI is therefore not classified as a "high-risk" AI system under the Colorado AI Act or the EU AI Act. Kiey nonetheless documents the system, its risk controls, and its compliance posture here so the controls are testable and producible on demand.
3. AI System Description
System name. Kiey AI (the "Kiey AI Home Services Assistant").
Purpose. Conversational triage of residential home-service problems and routing to human vendors and human agents inside the Kiey platform.
Foundation model. Anthropic Claude Haiku 4.5 (model identifier claude-haiku-4-5-20251001), accessed through the Anthropic SDK with the ANTHROPIC_API_KEY environment credential. Anthropic, PBC is the model provider and acts as a sub-processor under Kiey's Data Processing Addendum.
Inputs. User-submitted free-text describing a home-services issue. Optional structured fields collected on chat creation: topic, issue_type (product or service), appliance_make, appliance_model. Image vision is supported through the same model when the user attaches a photo to the chat. No biometric identifiers, no payment-card data, and no protected health information are collected — Kiey ToS §13.5 forbids users from submitting such inputs.
Outputs. Natural-language responses capped at 400 output tokens per turn (raised from 150 in earlier configurations to support image-vision diagnoses), restricted by system-prompt rules to 1–2 sentence replies for simple cases and up to four sentences for quick-fix instructions. The system also emits internal control tags (e.g., `[CATEGORY:xxx]`, `[CONNECT_USER]`, `[MAKE:...]`, `[MODEL:...]`) that are stripped before display and are used to update Firestore state.
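The tag-stripping step can be sketched as follows. This is an illustrative reconstruction, not the production code: `stripControlTags` is a hypothetical name, and the regex covers only the tags named above.

```javascript
// Illustrative sketch (not production code): strip internal control tags such
// as [CATEGORY:xxx], [CONNECT_USER], [MAKE:...], [MODEL:...] from a model
// reply before display, while capturing their values for Firestore state.
const CONTROL_TAG = /\[(CATEGORY|MAKE|MODEL):([^\]]*)\]|\[CONNECT_USER\]/g;

function stripControlTags(reply) {
  const tags = {};
  const visible = reply
    .replace(CONTROL_TAG, (match, name, value) => {
      if (name) tags[name.toLowerCase()] = value.trim();
      else tags.connect_user = true;
      return ""; // remove the tag from the user-visible text
    })
    .replace(/\s{2,}/g, " ")
    .trim();
  return { visible, tags };
}
```

The display layer would render only `visible`, while `tags` would drive the state update described in section 7.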
Pipeline.
- `ai_warmup.js` (`GET /v1/ai/warmup`) pre-warms the singleton Anthropic SDK client to reduce cold-start latency.
- `ai_chat.js` (`POST /v1/ai/chat`) creates the AI chat. It calls `resolveCategoryFromTopic()` and `findMatchingVendors()` in `vendor_matching.js` to map the free-text topic to one of approximately 239 canonical service categories and to pull the active matching vendors out of the Real Estate Team's vendor pool.
- `ai_process_message.js` (called from `send_text.js` whenever a user posts to an AI chat) sends the conversation to Claude Haiku 4.5, post-processes the reply through a sanitizer that strips banned and unsafe phrases, and writes the assistant message to Firestore.
- Vendor handoff is initiated by the user — never by the model alone. The user taps "Message a Pro" in the iOS or web client, which sends a `[CONNECT_ME_TO_VENDOR:{vendorId}:{vendorName}]` token; the backend creates a new human-to-human chat and pushes a notification to the selected vendor.
- `aiVendorEscalation.js` runs on a five-minute schedule and, after thirty minutes of vendor non-response, posts a follow-up message in the handoff chat and notifies the agent.
- `trackVendorResponse.js` is a Firestore trigger that flips `vendor_responded: true` on the AI context as soon as the vendor sends a real message, so the escalation never fires after a real human response.
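The handoff token in the pipeline above can be parsed along these lines. `parseHandoffToken` and its return shape are illustrative assumptions; only the token format itself comes from the source.

```javascript
// Illustrative sketch: parse the user-initiated handoff token. The empty-id
// form [CONNECT_ME_TO_VENDOR::agent] routes to the inviting human agent.
// Function name and return shape are assumptions for illustration.
function parseHandoffToken(text) {
  const m = text.match(/\[CONNECT_ME_TO_VENDOR:([^:\]]*):([^\]]*)\]/);
  if (!m) return null;
  const [, vendorId, vendorName] = m;
  if (vendorId === "" && vendorName === "agent") {
    return { target: "agent" }; // no-match fallback to the human agent
  }
  return { target: "vendor", vendorId, vendorName };
}
```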
System-prompt enforced flow. Understand → one quick fix → explain the underlying problem → offer to connect to a vendor. The system prompt (defined in buildSystemPrompt() in ai_process_message.js) hard-blocks claims that the model itself can connect users, notify agents, or send messages — only the human user, by tapping a button, can trigger a handoff.
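A minimal sketch of how such a prompt might be assembled follows. The actual production prompt in `buildSystemPrompt()` is not reproduced here; this only illustrates the enforced flow and the handoff hard-block, and `buildSystemPromptSketch` is a hypothetical stand-in.

```javascript
// Illustrative sketch (assumed structure, not the production prompt): the
// system prompt encodes the fixed flow and hard-blocks handoff claims.
function buildSystemPromptSketch(topic) {
  return [
    `You are a home-services triage assistant. Topic: ${topic}.`,
    "Flow: (1) understand the problem, (2) offer ONE quick fix that needs",
    "bare hands, under two minutes, and no tools, (3) explain the underlying",
    "problem, (4) offer to connect the user to a vendor.",
    "Never claim you can connect users, notify agents, or send messages;",
    "only the user, by tapping a button, can trigger a handoff.",
  ].join("\n");
}
```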
Vendor-matching architecture. The vendor_matching.js module maintains: (a) a list of approximately 239 canonical service categories; (b) role aliases (e.g., plumber → plumbing, roofer → roofing); (c) phrase aliases (e.g., locked out → locksmith, no hot water → water heater installation); (d) keyword tokens; and (e) a bidirectional related-category graph (e.g., hvac ↔ heating ↔ air conditioning). findMatchingVendors() produces a list of vendor owner_id strings sorted by match score, where 10 indicates a direct category match and 5 indicates an alias-resolved match. Vendors that are inactive, cancelled, deleted, or disabled by the Real Estate Team owner are filtered out before matching. A second-stage AI validation pass (validateVendorMatches()) sends the candidate list back to the model with a constrained "comma-separated numbers only, or 'none'" prompt to strip false-positive matches before they are stored.
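The scoring and filtering rules can be illustrated with a minimal sketch. The alias table is a tiny stand-in for the real tables in `vendor_matching.js`, and `matchVendors` is a hypothetical simplification of `findMatchingVendors()`; the status-field names are assumptions based on the filter list above.

```javascript
// Illustrative sketch of the scoring described above: 10 for a direct
// category match, 5 for an alias-resolved match; inactive, cancelled,
// deleted, or disabled vendors are filtered out before matching.
const ROLE_ALIASES = { plumber: "plumbing", roofer: "roofing" }; // abbreviated

function matchVendors(resolvedCategory, vendors) {
  return vendors
    .filter(v => v.active && !v.cancelled && !v.deleted && !v.disabled)
    .map(v => {
      const direct = v.category === resolvedCategory;
      const alias = ROLE_ALIASES[v.category] === resolvedCategory;
      return { owner_id: v.owner_id, score: direct ? 10 : alias ? 5 : 0 };
    })
    .filter(m => m.score > 0)
    .sort((a, b) => b.score - a.score) // highest relevance first
    .map(m => m.owner_id);
}
```

A second-stage validation pass, as described above, would then send this candidate list back to the model to strip false positives before storage.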
4. Intended Use and Users
Intended users.
- Real Estate Team Owners and the Agents and Employees they invite (B2B users).
- Homeowner Clients invited by an Agent into a Real Estate Team's market.
- Vendors invited by a Real Estate Team into the team's vendor pool — Vendors are recipients of handoffs, not direct AI users.
Intended use. Convenience-grade diagnostic triage of residential home-services problems and routing to human professionals. The output is informational only and is not professional advice (Kiey ToS §13.1). Every AI message is flagged in Firestore with is_ai_message, is_ai_generated, and where applicable ai_action, and is rendered with AI-disclosure indicators in the iOS, Android, and web clients (Kiey ToS §13.9).
Out-of-scope and prohibited uses. Kiey AI must not be used, and Kiey ToS §13.10 contractually prohibits use, for: housing access decisions; lending; insurance underwriting or pricing; employment, hiring, or termination decisions; education-access decisions; healthcare decisions; public-benefits eligibility; law-enforcement determinations; tenant or buyer ranking; biometric identification; deepfakes, voice clones, or impersonation; scraping or surveillance; any "high-risk" AI use under EU AI Act Annex III without independent assessment; and any practice prohibited under EU AI Act Article 5.
5. Risk Analysis
(a) Inaccurate or hallucinated advice
Risk. Foundation-model outputs may be wrong, outdated, or fabricated, and a homeowner could act on bad advice and damage property or injure themselves. Mitigations. The system prompt (`buildSystemPrompt()`) prohibits suggesting any fix that requires unplugging, removing panels, opening covers, accessing internal parts, or using tools. Quick fixes are restricted to "bare hands, under two minutes, no tools." Output is post-processed through a `BANNED_PHRASES` and `UNSAFE_FIX_PHRASES` filter that strips sentences such as "unplug the…", "remove the panel," and "take it apart." For any safety hazard (gas smell, carbon monoxide, fire, flooding near electrical, exposed wires), the prompt instructs the model to respond "Call 911 immediately" with no fixes. Output length is capped at 400 tokens per turn. Kiey ToS §§13.1–13.4 disclose that AI output can be inaccurate, that users must verify before acting, and that Kiey AI is not for emergencies.
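The sentence-level phrase filter can be sketched as follows, with an abbreviated stand-in phrase list. The naive sentence split is illustrative only; the production filter lists in `ai_process_message.js` are longer.

```javascript
// Illustrative sketch of the post-processing filter: drop any sentence that
// contains a banned or unsafe-fix phrase. This list is an abbreviated
// stand-in for BANNED_PHRASES / UNSAFE_FIX_PHRASES.
const UNSAFE_FIX_PHRASES = ["unplug the", "remove the panel", "take it apart"];

function sanitizeReply(reply) {
  return reply
    .split(/(?<=[.!?])\s+/) // naive sentence split, for illustration only
    .filter(s => !UNSAFE_FIX_PHRASES.some(p => s.toLowerCase().includes(p)))
    .join(" ")
    .trim();
}
```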
(b) Bias in vendor matching
Risk. A matching algorithm could systematically advantage or disadvantage some vendors on the basis of protected characteristics, raising fair-housing and consumer-protection concerns. Mitigations. `findMatchingVendors()` operates only on the vendor's declared service category, role aliases, and category aliases. It does not consume — and the vendor record does not surface to the matching function — any protected characteristic (race, ethnicity, national origin, sex, sexual orientation, gender identity, religion, age, disability, familial status, source of income, or marital status). The vendor pool itself is curated by each Real Estate Team and is not subject to algorithmic ranking by Kiey beyond category-relevance score. Kiey ToS §13.7 discloses that vendor matches are heuristic and not ranked by vendor quality. Kiey ToS §13.10(a)–(b) prohibits any use of vendor matching to evaluate or rank prospective tenants, buyers, or homeowners.
(c) Privacy and sensitive data
Risk. A user could paste medical, financial, or third-party personal data into the AI chat, or the model provider could retain inputs in ways inconsistent with Kiey's privacy commitments. Mitigations. Kiey ToS §13.5 contractually forbids users from submitting protected health information, payment-card data, non-public personal information of others, third-party confidential information, or trade secrets. User content is licensed to Kiey under ToS §14 for the limited purposes there enumerated. Anthropic, PBC processes inputs as a sub-processor under Kiey's Data Processing Addendum (https://kiey.com/dpa) and is listed at https://kiey.com/subprocessors. Per Anthropic's Commercial Terms in effect at the time of deployment, inputs and outputs sent to Anthropic via API are not used to train Anthropic's models; Kiey will reverify this on each annual review and on each subprocessor change. The system-prompt sanitizer (escapePromptVar()) collapses newlines, neutralizes bracket and brace tags, and caps user-controlled fields at 200 characters before any field is interpolated into the system prompt, defending against prompt-injection.
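The described `escapePromptVar()` behavior can be sketched as follows. This is an illustrative reconstruction, not the production implementation; only the three behaviors (newline collapse, bracket/brace neutralization, 200-character cap) come from the text above.

```javascript
// Illustrative sketch of escapePromptVar(): collapse newlines, neutralize
// bracket/brace characters so user text cannot forge control tags, and cap
// the field before it is interpolated into the system prompt.
function escapePromptVar(value) {
  return String(value ?? "")
    .replace(/[\r\n]+/g, " ")  // collapse newlines into single spaces
    .replace(/[\[\]{}]/g, "")  // neutralize tag delimiters
    .slice(0, 200);            // cap user-controlled field length
}
```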
(d) Manipulation, abuse, and rate exhaustion
Risk. Users could attempt to drive runaway costs, manipulate the model into producing unsafe output, or spam vendors with handoffs. Mitigations. A per-topic message ceiling warns at fifteen and auto-prompts a vendor connect at twenty messages. A daily topic ceiling of ten is implemented and currently disabled (TOPIC_LIMIT_ENABLED = false in ai_chat.js); it can be enabled via configuration without a code change. A concurrent-processing guard via the processing_since field on each ai_contexts document prevents two simultaneous turns from racing on the same conversation. Vendor non-response triggers a thirty-minute escalation to the user, the vendor, and the user's invited agent (aiVendorEscalation.js). The Acceptable Use Policy (Kiey ToS §15) prohibits manipulation, scraping, and abuse, and is enforceable via account suspension.
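The per-topic ceiling can be sketched as a simple threshold check. The thresholds match the text above; the function name and return values are assumptions.

```javascript
// Illustrative sketch of the per-topic message ceiling: warn at fifteen
// user messages, auto-offer a vendor connect at twenty.
const WARN_AT = 15;
const CONNECT_AT = 20;

function topicCeilingAction(userMessageCount) {
  if (userMessageCount >= CONNECT_AT) return "auto_offer_connect";
  if (userMessageCount >= WARN_AT) return "warn";
  return "none";
}
```

The daily topic ceiling would sit in front of this check and, per the text above, is gated on the `TOPIC_LIMIT_ENABLED` configuration flag.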
(e) Lack of human oversight
Risk. Solely automated decision-making is restricted under GDPR Article 22, Quebec Law 25, and the California ADMT regulations. Mitigations. Kiey AI cannot transact, dispatch, charge, or take any action with real-world consequences. Every economically meaningful step is gated on a human user's explicit tap. The model does not select a vendor on the user's behalf — it surfaces a vendor picker, and the user picks. The model cannot connect chats; the system prompt is hardened against any phrasing that suggests it can. When no vendors match in the user's market, the path is [CONNECT_ME_TO_VENDOR::agent], a fallback that opens a chat with a human agent. All AI-generated messages carry is_ai_message, is_ai_generated, and (where relevant) ai_action flags so client UIs can label them as AI output and so audit logs are unambiguous.
(f) Cross-border data transfer
Risk. International transfers of personal data may require specific safeguards under GDPR, the UK GDPR, Quebec Law 25, and other regimes. Mitigations. Cross-border transfer terms, including Standard Contractual Clauses where applicable, are documented in Kiey's DPA at https://kiey.com/dpa. Kiey is operated for the US market today; future EU expansion will require an updated DPA, an EU representative, and re-evaluation of this assessment under the EU AI Act and EU GDPR.
(g) Children
Risk. Children's data is given special protection under COPPA, GDPR, Quebec Law 25, the EU AI Act, and similar laws. Mitigations. Kiey's services are not directed to children. Kiey ToS §17.8 prohibits use of the service by, and prohibits submission of personal data of, minors under thirteen (and under sixteen where required by local law). Sign-up flows do not solicit child-user data. A confirmed report of a child user triggers account termination and data deletion under the Privacy Policy.
6. Compliance Mapping
The following table maps each major statute or regulatory framework to the Kiey controls that satisfy or address it.
| Statute / Framework | Kiey Control |
|---|---|
| Colorado AI Act ("CAIA," eff. Feb 1, 2026) | Kiey AI is not used for "consequential decisions" as defined by §6-1-1701(3). ToS §13.10 contractually prohibits such use. Disclosure of AI use is provided in chat headers and at onboarding (ToS §13.9). Opt-out from training-use available via support@kiey.com (ToS §13.12). This document is the algorithmic impact assessment of record. |
| EU AI Act (Regulation (EU) 2024/1689) | None of EU AI Act Article 5's prohibited practices are present (no social scoring, no exploitative manipulation, no real-time biometric identification, no untargeted facial-image scraping). Kiey AI is not on the Annex III high-risk list as currently used. General-Purpose AI Model obligations rest with Anthropic, PBC; Kiey relies on Anthropic's GPAI compliance documentation. ToS §13.10(g)–(i) contractually prohibits any deployer use that would push Kiey AI into Annex III or Article 5 territory. EU expansion would trigger a fresh assessment. |
| Quebec Law 25 — Automated Decision Making (s. 12.1) | Kiey AI is not used for solely automated decisions producing legal or similarly significant effects. Human review is always available because a human vendor or agent is always at the other end of any consequential step. Information about the AI is available on request; opt-out is available under ToS §13.12. |
| California CCPA / CPRA — ADMT regulations | No profiling for consequential decisions. Right to opt out of automated decision-making and profiling for training purposes is published in ToS §13.12 and exercised via support@kiey.com. |
| Utah Artificial Intelligence Policy Act | AI-disclosure indicators are present on first AI interaction in every client (ToS §13.9). |
| NYC Local Law 144 (Automated Employment Decision Tools) | Kiey AI is never used in employment decisions. ToS §13.10(a) prohibits this use. No bias audit obligations apply. |
| Illinois BIPA (740 ILCS 14/) and HB-3773 | Kiey AI does not collect, derive, or store biometric identifiers or biometric information. ToS §13.10(d) prohibits biometric processing without separate compliance. |
| Tennessee ELVIS Act | Kiey AI does not produce voice clones, voice synthesis, or likeness impersonation. ToS §13.10(c) prohibits this use. |
| Federal Fair Housing Act, ECOA, FCRA | Kiey AI does not score, rank, or evaluate persons for housing access, credit, or insurance. ToS §13.10(a)–(b) prohibits this use. |
| Canada AIDA (upon entry into force) | Kiey AI is unlikely to be a "high-impact system" as that term will be defined in regulation. Kiey will reassess on AIDA's effective date. |
| Washington My Health My Data Act | Kiey AI does not solicit or process health data. ToS §13.5 prohibits such inputs. |
7. Data Flow and Retention
Inputs. User text from the iOS, Android, or web client is posted to POST /v1/chats/send_text. When the chat is an AI chat (is_ai_chat: true), send_text.js calls processAIMessage() in ai_process_message.js, which calls Anthropic's chat-completions API with the system prompt, the conversation history, and the user message.
Outputs. The model's reply is post-processed by sanitizeAIResponse(), then written to Firestore at tenants/{tenantId}/chats/{chatId}/messages/ with the flags is_ai_message: true and is_ai_generated: true. Last-message snapshots are written to the chat document for inbox views.
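The flagged message document can be sketched as follows. `buildAIMessageDoc` is a hypothetical helper, and the `created_date` field on message documents is an assumption; the `is_ai_message`, `is_ai_generated`, and `ai_action` flags come from the text above.

```javascript
// Illustrative sketch of the message document written to
// tenants/{tenantId}/chats/{chatId}/messages/ after sanitization.
function buildAIMessageDoc(text, aiAction) {
  const doc = {
    text,
    is_ai_message: true,    // flags every AI turn at the data layer
    is_ai_generated: true,
    created_date: new Date().toISOString(), // assumed timestamp field
  };
  if (aiAction) doc.ai_action = aiAction; // only set where applicable
  return doc;
}
```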
AI conversation state. Per-chat state is stored at tenants/{tenantId}/ai_contexts/{chatId} and includes user_id, topic, identified_service_category, matched_vendor_ids, matched_vendor_scores, appliance_make, appliance_model, issue_type, problem_description, quick_fix_given, quick_fix_result, status, ai_status, vendor_routed_to, vendor_ticket_chat_id, user_message_count, processing_since, vendor_escalated, vendor_escalated_at, vendor_responded, vendor_responded_at, created_date, and updated_at.
Sub-processor processing. Anthropic, PBC processes inputs and produces outputs. Per Anthropic's Commercial Terms at the time of deployment, inputs and outputs are not used to train Anthropic's models. Kiey reverifies this on each annual review of this assessment and at every subprocessor list update at https://kiey.com/subprocessors.
Retention. Kiey retains conversational data for the duration of the user's account and for the deletion-grace period defined in the Privacy Policy §12 and the Data Processing Addendum §11. Real Estate Teams may export or delete tenant data through Kiey's data-export and account-deletion flows. Training opt-out per ToS §13.12 is forward-looking and does not retroactively remove data already used for training; this is disclosed in §13.12 itself.
8. Human Oversight
Kiey AI is structured so a human is always available to take over any economically meaningful step.
- AI is non-binding. Output is informational; the user decides whether to act.
- Vendor handoff is human-gated. The handoff to a human Vendor only happens when the user explicitly taps a vendor in the picker, which fires the `[CONNECT_ME_TO_VENDOR:{vendorId}:{vendorName}]` token to the backend.
- Agent fallback. When no Vendor in the user's market matches the resolved category, the system surfaces an agent fallback (`[CONNECT_ME_TO_VENDOR::agent]`) that opens a chat with the human Agent who invited the user.
- Vendor escalation. If the connected Vendor does not respond within thirty minutes, `aiVendorEscalation.js` posts a follow-up message and pushes a notification to the user, the Vendor, and the user's invited Agent so a human can intervene.
- Disclosure. Every AI message is flagged at the data layer (`is_ai_message`, `is_ai_generated`, `ai_action`, `is_escalation`) and rendered with AI-disclosure indicators in the client UI.
- Right to opt out. Per ToS §13.12, Real Estate Teams may opt out of training use for their content by emailing support@kiey.com. This serves as the opt-out under CCPA/CPRA, GDPR Article 22, and Quebec Law 25 ADMT for training purposes.
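The escalation predicate run by the five-minute scheduler can be sketched as follows. `handoff_at` is a hypothetical timestamp field for illustration; the schema in section 7 records `vendor_escalated`, `vendor_responded`, and related fields.

```javascript
// Illustrative sketch: escalate only when thirty minutes have passed since
// the handoff, the vendor has not responded (trackVendorResponse.js flips
// vendor_responded on any real vendor message), and escalation has not
// already fired.
const ESCALATE_AFTER_MS = 30 * 60 * 1000;

function shouldEscalate(ctx, nowMs) {
  if (ctx.vendor_responded) return false; // real human already answered
  if (ctx.vendor_escalated) return false; // escalate at most once
  return nowMs - ctx.handoff_at >= ESCALATE_AFTER_MS;
}
```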
9. Monitoring and Update Process
- Per-request token usage (input and output tokens) is logged on every Anthropic call for cost and abuse monitoring.
- Per-topic message ceilings are enforced in `ai_process_message.js`: warn at fifteen messages, auto-offer connect at twenty.
- Daily topic ceiling of ten is implemented and toggleable (`TOPIC_LIMIT_ENABLED` in `ai_chat.js`).
- Cost monitoring is performed against the Anthropic dashboard; alarms are configured on monthly spend.
- System-prompt evolution is tracked through ordinary version control and code review; any change to `buildSystemPrompt()` requires a pull-request review.
- Annual review. This document is reviewed every year and on each material AI-feature change (model swap, new feature category, addition of an AI surface to a new role, or expansion to a new regulated jurisdiction).
- Responsible disclosure. Security and AI-safety reports go to support@kiey.com. Kiey commits to acknowledge within five business days and to coordinate disclosure with the reporter.
- Regulator cooperation. Per ToS §13.13, Kiey will reasonably cooperate with valid AI regulatory inquiries, audits, and information requests, and will produce this document on request.
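The per-request token logging in the first bullet of this section can be sketched as follows. The accumulator shape and function name are assumptions; `usage.input_tokens` and `usage.output_tokens` follow the Anthropic Messages API response format.

```javascript
// Illustrative sketch of per-request token accounting: each Anthropic
// Messages API response carries usage.input_tokens / usage.output_tokens,
// which are accumulated for cost and abuse monitoring.
function recordUsage(totals, response) {
  const { input_tokens = 0, output_tokens = 0 } = response.usage ?? {};
  return {
    input_tokens: totals.input_tokens + input_tokens,
    output_tokens: totals.output_tokens + output_tokens,
    requests: totals.requests + 1,
  };
}
```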
10. Sign-Off
Approved by Kiey Holdings, Ltd. leadership on May 4, 2026. Next scheduled review: May 4, 2027 (or earlier on material change to the AI features).
Document version: 2026-05-04. Earlier versions available on request to support@kiey.com.
This document describes Kiey Holdings, Ltd.'s governance and risk management for AI features in the Service. It is published for transparency and to support customer and regulatory due diligence. It does not modify Kiey's Terms of Service, Privacy Policy, or Data Processing Addendum, each of which controls in case of conflict. Comments and questions: support@kiey.com.