Atlassian's policy permits the use of data generated through its services, potentially including Loom recordings and transcripts, to improve its products and develop AI-powered features.
This analysis describes what Loom's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology for details.
Use of user-generated video and text content to train or improve AI systems is a significant and evolving area of privacy concern, particularly where the content includes sensitive business or personal communications.
Interpretive note: The document HTML provided was heavily truncated, so the specific language governing AI and product-improvement use could not be directly extracted. This provision reflects Atlassian's publicly known policy structure rather than verbatim document text.
Your Loom videos and transcripts may contribute to Atlassian's AI product development unless your enterprise agreement restricts this; individual free-tier users have limited visibility into how their content is specifically used for AI improvement purposes.
How other platforms handle this
We are simplifying our Terms of Use, including clarifications around the use of AI tools, and their data use. We have moved the terms that describe AI Features, which were previously written for a Creator audience and located under the AI-Based Tools Supplemental Terms and Disclaimer, into the User ...
We may use machine learning and other artificial intelligence (AI) technologies ("AI Technologies") to provide and improve our Service. For example, we may use such AI Technologies to analyze and process your contributions and interactions to provide you with personalized experiences, content recomm...
We use Personal Data to detect and prevent fraud, and to develop and improve our fraud detection models and other machine learning systems. This may include using transaction data, device information, and other Personal Data to train and refine our systems.
Monitoring
Loom has changed this document before.
1) REGULATORY LANDSCAPE: AI training on personal data engages GDPR Article 6 (lawful basis), Article 22 (automated decision-making), and the EU AI Act's provisions on high-risk AI systems and general-purpose AI models. The FTC has issued guidance on AI and data practices under its unfair and deceptive practices authority. CCPA/CPRA's "sensitive personal information" provisions may apply where AI processing involves audio or video of private communications.

2) GOVERNANCE EXPOSURE: High. The use of customer-generated content, including video recordings, for AI model training without granular consent or opt-out mechanisms represents significant governance exposure, particularly given post-GDPR enforcement trends. Enterprise customers may have contractual protections in their DPA, but individual and SMB users may not.

3) JURISDICTION FLAGS: EU/EEA enforcement authorities (particularly the Irish DPC, which leads for Atlassian given its EU establishment) have scrutinized AI training on user data. California's CPRA creates a right to limit the use of sensitive personal information. UK ICO guidance on AI and data protection is also relevant for UK users.

4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers should review whether their Atlassian DPA contains explicit carve-outs preventing use of customer content for AI training. B2B contracts should specify whether Atlassian's AI features (such as Loom transcription, summaries, or Rovo AI integration) operate as separate data processors with independent consent requirements.

5) COMPLIANCE CONSIDERATIONS: Legal teams should assess whether existing user consent mechanisms satisfy requirements for AI training use under applicable law. Privacy notices may need updating to explicitly describe AI use cases. Organizations in regulated industries (financial services, healthcare, legal) should evaluate whether Loom AI features are compatible with their data handling obligations.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Loom.