Core API

The core module provides the main user-facing API for LayoutLens.

LayoutLens Class

class layoutlens.LayoutLens(api_key: str | None = None, model: str = 'gpt-4o-mini', provider: str = 'openrouter', output_dir: str = 'layoutlens_output', cache_enabled: bool = True, cache_type: str = 'memory', cache_ttl: int = 3600)[source]

Bases: object

Simple API for AI-powered UI testing with natural language.

This class provides an intuitive interface for analyzing websites and screenshots using natural language queries, designed for developer workflows and CI/CD integration.

Examples

>>> lens = LayoutLens(api_key="sk-...")
>>> result = lens.analyze("https://example.com", "Is the navigation clearly visible?")
>>> print(result.answer)
>>> # Compare two designs
>>> result = lens.compare(
...     ["before.png", "after.png"],
...     "Are these layouts consistent?"
... )

__init__(api_key: str | None = None, model: str = 'gpt-4o-mini', provider: str = 'openrouter', output_dir: str = 'layoutlens_output', cache_enabled: bool = True, cache_type: str = 'memory', cache_ttl: int = 3600)[source]

Initialize LayoutLens with AI provider credentials.

Parameters:
  • api_key (str, optional) – API key for the provider. If omitted, falls back to the OPENAI_API_KEY or OPENROUTER_API_KEY environment variables

  • model (str, default "gpt-4o-mini") – Model to use for analysis (provider-specific naming)

  • provider (str, default "openrouter") – AI provider to use ("openrouter", "openai", "anthropic", "google")

  • output_dir (str, default "layoutlens_output") – Directory for storing screenshots and results

  • cache_enabled (bool, default True) – Whether to enable result caching for performance

  • cache_type (str, default "memory") – Type of cache backend: "memory" or "file"

  • cache_ttl (int, default 3600) – Cache time-to-live in seconds (1 hour default)

analyze(source: str | Path, query: str, viewport: str = 'desktop', context: dict[str, Any] | None = None) → AnalysisResult[source]

Analyze a URL or screenshot with a natural language query.

Parameters:
  • source (str or Path) – URL to analyze or path to screenshot image

  • query (str) – Natural language question about the UI

  • viewport (str, default "desktop") – Viewport size for URL capture ("desktop", "mobile", "tablet")

  • context (dict, optional) – Additional context for analysis (user_type, browser, etc.)

Returns:

Detailed analysis with answer, confidence, and reasoning

Return type:

AnalysisResult

Examples

>>> result = lens.analyze("https://github.com", "Is the search bar easy to find?")
>>> result = lens.analyze("screenshot.png", "Are the buttons large enough for mobile?")

compare(sources: list[str | Path], query: str = 'Are these layouts consistent?', viewport: str = 'desktop', context: dict[str, Any] | None = None) → ComparisonResult[source]

Compare multiple URLs or screenshots.

Parameters:
  • sources (list[str or Path]) – List of URLs or screenshot paths to compare

  • query (str, default "Are these layouts consistent?") – Natural language question for comparison

  • viewport (str, default "desktop") – Viewport size for URL captures

  • context (dict, optional) – Additional context for analysis

Returns:

Comparison analysis with overall assessment

Return type:

ComparisonResult

Examples

>>> result = lens.compare([
...     "https://mysite.com/before",
...     "https://mysite.com/after"
... ], "Did the redesign improve the user experience?")

analyze_batch(sources: list[str | Path], queries: list[str], viewport: str = 'desktop', context: dict[str, Any] | None = None) → BatchResult[source]

Analyze multiple sources with multiple queries efficiently.

Parameters:
  • sources (list[str or Path]) – List of URLs or screenshot paths

  • queries (list[str]) – List of natural language queries

  • viewport (str, default "desktop") – Viewport size for URL captures

  • context (dict, optional) – Additional context for analysis

Returns:

Batch analysis results with aggregated metrics

Return type:

BatchResult
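
A minimal batch-run sketch, assuming the layoutlens package is importable under the name shown in the class signature; how analyze_batch pairs sources with queries is not specified here, and the guard keeps the sketch a no-op where the package is not installed.

```python
import importlib.util

# Hypothetical pages and questions for one batch run.
sources = ["https://example.com/", "https://example.com/pricing"]
queries = [
    "Is the primary call-to-action visible above the fold?",
    "Is the navigation consistent across pages?",
]

# Sketch-only guard: skip the network call when layoutlens is not installed.
if importlib.util.find_spec("layoutlens") is not None:
    from layoutlens import LayoutLens

    lens = LayoutLens()  # falls back to the API-key environment variables
    batch = lens.analyze_batch(sources, queries, viewport="desktop")
    print(batch)  # BatchResult with aggregated metrics
```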

async analyze_async(source: str | Path, query: str, viewport: str = 'desktop', context: dict[str, Any] | None = None) → AnalysisResult[source]

Async version of the analyze method, suitable for concurrent processing.

Parameters:
  • source (str or Path) – URL or path to screenshot file

  • query (str) – Natural language query about the UI

  • viewport (str, default "desktop") – Viewport size for URL captures ("desktop", "mobile_portrait", etc.)

  • context (dict, optional) – Additional context for analysis

Returns:

Analysis result with answer, confidence, and metadata

Return type:

AnalysisResult
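
The async variant can be combined with asyncio.gather to await several analyses concurrently. A hedged sketch, guarded so it stays inert where the layoutlens package is not installed:

```python
import asyncio
import importlib.util

async def main() -> None:
    from layoutlens import LayoutLens

    lens = LayoutLens()
    # Two independent analyses awaited concurrently rather than in sequence.
    results = await asyncio.gather(
        lens.analyze_async("https://example.com", "Is the header legible?"),
        lens.analyze_async("https://example.com", "Is the footer cluttered?"),
    )
    for result in results:
        print(result.answer)

# Sketch-only guard: run only where layoutlens is actually installed.
if importlib.util.find_spec("layoutlens") is not None:
    asyncio.run(main())
```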

async analyze_batch_async(sources: list[str | Path], queries: list[str], viewport: str = 'desktop', context: dict[str, Any] | None = None, max_concurrent: int = 5) → BatchResult[source]

Analyze multiple sources with multiple queries concurrently.

Parameters:
  • sources (list[str or Path]) – List of URLs or screenshot paths

  • queries (list[str]) – List of natural language queries

  • viewport (str, default "desktop") – Viewport size for URL captures

  • context (dict, optional) – Additional context for analysis

  • max_concurrent (int, default 5) – Maximum number of concurrent analyses

Returns:

Batch analysis results with aggregated metrics

Return type:

BatchResult
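
For larger batches, max_concurrent bounds the number of in-flight analyses, which can help stay under provider rate limits. A sketch, assuming the layoutlens package is importable and guarded so it is a no-op otherwise:

```python
import asyncio
import importlib.util

sources = [
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/pricing",
]
queries = ["Does the layout look broken at desktop width?"]

async def run_batch():
    from layoutlens import LayoutLens

    lens = LayoutLens()
    # At most two analyses are in flight at any moment.
    return await lens.analyze_batch_async(sources, queries, max_concurrent=2)

# Sketch-only guard: skip the call when layoutlens is not installed.
if importlib.util.find_spec("layoutlens") is not None:
    print(asyncio.run(run_batch()))
```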

check_accessibility(source: str | Path, viewport: str = 'desktop') → AnalysisResult[source]

Quick accessibility check with common WCAG queries.

check_mobile_friendly(source: str | Path) → AnalysisResult[source]

Quick mobile responsiveness check.

check_conversion_optimization(source: str | Path, viewport: str = 'desktop') → AnalysisResult[source]

Check for conversion-focused design elements.
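
The three convenience checks take no query: each runs a preset set of questions (WCAG-style, responsiveness, conversion). A hedged sketch, guarded so it only executes where the layoutlens package is installed:

```python
import importlib.util

url = "https://example.com"  # hypothetical page under test

# Sketch-only guard: run only where layoutlens is installed.
if importlib.util.find_spec("layoutlens") is not None:
    from layoutlens import LayoutLens

    lens = LayoutLens()
    a11y = lens.check_accessibility(url)           # common WCAG queries
    mobile = lens.check_mobile_friendly(url)       # responsiveness
    cro = lens.check_conversion_optimization(url)  # conversion elements
    for result in (a11y, mobile, cro):
        print(result.answer)
```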

get_cache_stats() → dict[str, Any][source]

Get cache performance statistics.

clear_cache() → None[source]

Clear all cached analysis results.

enable_cache() → None[source]

Enable caching.

disable_cache() → None[source]

Disable caching.
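
The cache parameters from __init__ combine with these methods as sketched below; the exact keys of the stats dict are left unspecified, and the guard keeps the sketch inert where the layoutlens package is not installed.

```python
import importlib.util

ttl = 24 * 3600  # 24 hours instead of the one-hour default

# Sketch-only guard: run only where layoutlens is installed.
if importlib.util.find_spec("layoutlens") is not None:
    from layoutlens import LayoutLens

    # File-backed cache so results survive between processes.
    lens = LayoutLens(cache_type="file", cache_ttl=ttl)
    lens.analyze("https://example.com", "Is the hero image visible?")
    print(lens.get_cache_stats())  # performance statistics; exact keys vary
    lens.clear_cache()             # drop all cached results
    lens.disable_cache()           # bypass caching for subsequent calls
```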

create_test_suite(name: str, description: str, test_cases: list[dict[str, Any]]) → UITestSuite

Create a test suite from specifications.

Parameters:
  • name (str) – Name of the test suite

  • description (str) – Description of the test suite

  • test_cases (list[dict]) – List of test case specifications

Returns:

UITestSuite object
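
The shape of a test-case specification dict is not documented here; the keys below (source, query) are an assumption chosen to mirror the analyze method's parameters. A sketch, guarded so it only builds the suite where the layoutlens package is installed:

```python
import importlib.util

# Hypothetical specification dicts; the accepted keys are an assumption.
test_cases = [
    {"source": "https://example.com/", "query": "Is the navigation clearly visible?"},
    {"source": "https://example.com/checkout", "query": "Is the pay button prominent?"},
]

# Sketch-only guard: build the suite only where layoutlens is installed.
if importlib.util.find_spec("layoutlens") is not None:
    from layoutlens import LayoutLens

    lens = LayoutLens()
    suite = lens.create_test_suite(
        name="smoke",
        description="Layout smoke checks for the main funnel",
        test_cases=test_cases,
    )
```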

run_test_suite(suite: UITestSuite, parallel: bool = False, max_workers: int = 4) → list[UITestResult]

Run a test suite and return results.

Parameters:
  • suite (UITestSuite) – The test suite to run

  • parallel (bool, default False) – Whether to run tests in parallel

  • max_workers (int, default 4) – Maximum number of parallel workers

Returns:

List of UITestResult objects
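
A parallel run might look as follows. The test-case keys and the passed attribute on UITestResult are assumed names (the latter is read defensively with getattr), and the guard keeps the sketch a no-op where the layoutlens package is not installed.

```python
import importlib.util

def summarize(results) -> str:
    # `passed` is an assumed UITestResult attribute; read it defensively.
    failed = sum(1 for r in results if not getattr(r, "passed", True))
    return f"{len(results)} tests, {failed} failed"

# Sketch-only guard: run only where layoutlens is installed.
if importlib.util.find_spec("layoutlens") is not None:
    from layoutlens import LayoutLens

    lens = LayoutLens()
    suite = lens.create_test_suite(
        name="smoke",
        description="Layout smoke checks",
        test_cases=[{"source": "https://example.com/", "query": "Is the page readable?"}],
    )
    # parallel=True fans the tests out over up to max_workers workers.
    results = lens.run_test_suite(suite, parallel=True, max_workers=2)
    print(summarize(results))
```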

Data Classes