UX/UI designer @ Denso
AI Desk Research Tool
BACKGROUND
Aligning teams through shared content
To support DENSO’s global research strategy, I designed an interface that connects directly to SnowOwl. The goal was to replace fragmented, manual workflows with a central hub that allowed employees to:
Gather articles by category and topic from reliable secondary sources
Share selected articles with stakeholders for review and approval
Generate structured newsletters tailored to the company’s strategy
By combining automation with collaboration, the system turned scattered research efforts into one shared, scalable process.
PROCESS
PROBLEM
Teams needed reliable updates on market, business, and consumer shifts. But without a central tool, each team ran their own process—scraping sources, storing files in personal folders, and assembling newsletters manually.
This created repeated friction, resulting in more noise and less clarity.
“Files lived in SharePoint, but nothing was searchable. Important updates often slipped by.”
Hard to find what mattered
“As more sources were added, the process broke down. We couldn’t keep up with the volume.”
Not built to scale
“Without clear sourcing, it was hard to trust whether an article was accurate or up to date.”
Uncertain credibility
“Scraping and formatting articles took up entire days. It pulled time away from actual analysis.”
Too much manual work
AUTOMATION SETUP
Users needed an interface that supported both automation and manual verification. While SnowOwl automated scraping and summarizing, we still had to manually:
Re-check publish dates
Confirm relevancy and accuracy
Ensure the credibility of sources
Check article scrapeability
Share to stakeholders for selection
Receive feedback and generate the newsletter
RESEARCH
I studied how firms like Accenture position AI agents as structured systems for sourcing, validating, and scaling knowledge. Those references set the bar for credibility and governance.
Our team’s own experiments with agentic AI made the gaps clear: scraping worked, but paywalls broke flows; summaries came fast, but often missed context.
This mix of outside benchmarks and internal trials shaped the interface: AI handles the repetitive lift, while people keep control of accuracy and focus.
DESIGN
Based on these results, I created an interface that offers:
Trusted sources
Select and add credible external sources (e.g., McKinsey, BCG) by category and topic
Article previews
Verify publish dates, relevance, and accuracy before including articles
Seamless collaboration
Share selected articles through the application for feedback and approval
AI-assisted summaries
Generate summaries with customizable, theme-based prompts that align insights with company strategy
Scalable output
Assemble newsletters for different teams or topics, with consistent formatting and export to SharePoint
The interface balances automation with trust. AI scrapes articles, generates draft summaries, and organizes them into previews. Each preview shows source, date, and metadata up front, so teams don’t waste time checking links later.
Humans guide the rest: with one click, they can confirm, comment, or swap out content. Contextual prompts ensure AI outputs align with strategy, not just surface-level summaries.
The result is a repeatable flow: credible sources in, structured insights out. Faster than manual curation, but still human-centered.
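The flow described above can be sketched as a minimal pipeline. This is a hypothetical illustration only; the source allowlist, the 90-day freshness window, and all names here are assumptions, and SnowOwl’s actual scraping and summarization are not shown.

```python
from dataclasses import dataclass
from datetime import date

# Assumed allowlist of credible secondary sources (illustrative only).
TRUSTED_SOURCES = {"McKinsey", "BCG", "Accenture"}

@dataclass
class Article:
    title: str
    source: str
    published: date
    summary: str

def vet(articles, today, max_age_days=90):
    """Keep only articles from trusted sources published within the window."""
    return [
        a for a in articles
        if a.source in TRUSTED_SOURCES
        and (today - a.published).days <= max_age_days
    ]

def preview(article):
    """Surface source and date up front, as each article preview does."""
    return f"[{article.source} · {article.published.isoformat()}] {article.title}"

articles = [
    Article("AI in mobility", "McKinsey", date(2024, 5, 1), "…"),
    Article("Untrusted blog post", "RandomBlog", date(2024, 5, 2), "…"),
    Article("Old report", "BCG", date(2022, 1, 1), "…"),
]

kept = vet(articles, today=date(2024, 6, 1))
for a in kept:
    print(preview(a))
```

The point of the sketch is the division of labor: automation filters and formats, while the metadata-first preview leaves the accuracy judgment with the human reviewer.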
WHAT I LEARNED
Most importantly, I saw how small design choices, like article previews, contextual prompts, and export consistency, can turn abstract AI capabilities into something teams can actually rely on every week.