
Services

Most BI problems share familiar patterns. Here is how I help untangle them.

Reporting trust, model change risk, dashboard performance, governance visibility, and review-controlled AI workflows are the main patterns I work on.

Power BI · Databricks · SQL and Python · Azure · Semantic models
Services overview

BI as production code

The same five steps, repeatedly.

Power BI engineering should flow through the same gates as application code: source control, review, validation, deploy, monitor. The methodology I ship against is deliberately ordinary; the discipline is in never skipping a step.

  1. Source

    PBIP, TMDL, and PBIR under Git. The model is plain text, not a binary file.

  2. Review

    Every change is a diff a reviewer can read line by line before it goes anywhere.

  3. Validate

    Pattern-based DAX risk checks and AI-assisted test drafting, with humans approving (a minimal sketch of this step follows the list).

  4. Deploy

    Review-gated promotion. Nothing ships without an explicit approval step.

  5. Monitor

    Performance, freshness, and quality tracked as numbers, not adjectives.
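
To make the Validate step concrete, here is a minimal sketch of a pattern-based DAX risk check. It assumes measures live as plain-text .tmdl files in a Git-tracked folder; the patterns and folder name are illustrative examples, not a fixed ruleset.

```python
"""Minimal sketch: flag risky DAX patterns in TMDL files before deployment.

Assumptions: measures are stored as plain-text .tmdl files under a Git-tracked
folder; the patterns below are illustrative examples, not a fixed ruleset.
"""
import re
import sys
from pathlib import Path

# Illustrative heuristics a reviewer might want surfaced, not a complete list.
RISK_PATTERNS = {
    "filter-over-all-table": re.compile(r"FILTER\s*\(\s*ALL\s*\(", re.IGNORECASE),
    "nested-iterator": re.compile(r"SUMX\s*\([^)]*SUMX\s*\(", re.IGNORECASE),
    "hardcoded-date": re.compile(r"DATE\s*\(\s*\d{4}\s*,", re.IGNORECASE),
}


def scan(model_dir: Path) -> list[tuple[str, str, int]]:
    """Return (file, rule, line number) for every flagged DAX expression."""
    findings = []
    for tmdl_file in sorted(model_dir.rglob("*.tmdl")):
        text = tmdl_file.read_text(encoding="utf-8")
        for rule, pattern in RISK_PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                findings.append((str(tmdl_file), rule, line_no))
    return findings


if __name__ == "__main__":
    # Folder name is an example; point this at the model's TMDL definition folder.
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("definition")
    for file, rule, line in scan(target):
        print(f"{file}:{line}: {rule}")
```

A check like this can run locally or as a pre-deployment gate; findings go to a human reviewer rather than blocking anything automatically.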

Problems I solve

Common BI challenges, condensed.

Each section focuses on the problem, how I help, and the outcome teams need.

Reporting modernization

Problem

Legacy reports are fragile, hard to maintain, or built on platforms the team is moving away from. Stakeholders work around the reports instead of relying on them. Migration backlogs grow because no one has time to untangle the old logic and rebuild it properly.

How I help

  • Cognos-to-Power BI migration with structure cleanup, not just visual replication
  • Operational and management reporting in Power BI and Paginated Reports
  • Report architecture decisions: what to rebuild, what to retire, what to consolidate

Outcome

Reports that are structured for maintainability, not just delivery. Stakeholders use the reporting directly instead of exporting to spreadsheets. The modernization backlog shrinks because each report is rebuilt with a clearer model and fewer dependencies.

Good fit when

  • A platform migration is planned or stalled and reports are in scope
  • Existing reports are too fragile or complex for the current team to maintain confidently
  • Stakeholders are asking for new reporting but the foundation is not ready

Engagement shape

Duration: 4 to 12 weeks
Team fit: Solo BI lead or a 2 to 5 person BI / analytics team
Investment: Scoped per engagement

Not a fit when

The underlying data platform itself is being rebuilt. Reporting is best modernized on stable foundations, not in the middle of a data platform rebuild.

Power BI · Paginated Reports · SQL · Azure

Semantic model engineering and validation

Problem

Semantic models are edited directly in Power BI Desktop with no source control, no review step, and no way to catch measure-level regressions before they reach production. Changes are tested manually or not tested at all. As models grow, confidence in deploying changes drops.

How I help

  • PBIP, TMDL, and PBIR-aware working methods with source control integration
  • Measure validation workflows and DAX risk detection before deployment
  • Structured review gates that make models easier to inspect and evolve safely

Outcome

Models that can be reviewed, tested, and deployed with higher confidence. The team catches issues before stakeholders do. Model changes are traceable and reversible, which makes the pace of delivery more sustainable.
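
As an illustration of what traceable, reviewable model changes can look like, the sketch below produces a line-by-line diff between two exported copies of a model, for example the main checkout and a working branch. It assumes PBIP/TMDL files are plain text on disk; the folder names are placeholders.

```python
"""Minimal sketch: a reviewer-readable diff between two TMDL exports of a model.

Assumptions: the model is saved in PBIP format so its TMDL files are plain text;
the folder names below are placeholders for two Git checkouts of the same model.
"""
import difflib
from pathlib import Path


def diff_models(base_dir: Path, head_dir: Path) -> str:
    """Unified diff of every TMDL file present in the base copy of the model."""
    chunks = []
    for base_file in sorted(base_dir.rglob("*.tmdl")):
        head_file = head_dir / base_file.relative_to(base_dir)
        if not head_file.exists():
            chunks.append(f"--- {base_file} (removed in head)\n")
            continue
        chunks.extend(
            difflib.unified_diff(
                base_file.read_text(encoding="utf-8").splitlines(keepends=True),
                head_file.read_text(encoding="utf-8").splitlines(keepends=True),
                fromfile=str(base_file),
                tofile=str(head_file),
            )
        )
    return "".join(chunks)


if __name__ == "__main__":
    # Placeholder paths: two checkouts of the same PBIP model.
    print(diff_models(Path("model-main/definition"), Path("model-branch/definition")))
```

In practice Git already produces this diff; the point is that a plain-text model makes measure changes visible at review time instead of after deployment.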

Good fit when

  • Model changes have caused production issues that were only caught after deployment
  • The team wants to adopt source control for Power BI but is unsure where to start
  • Measure logic is growing complex enough that manual testing is no longer reliable

Engagement shape

Duration: 2 to 8 weeks
Team fit: BI team of 2 to 8 with at least one Power BI developer already in place
Investment: Scoped per engagement

Not a fit when

Power BI Desktop isn't yet the core authoring tool, or the team has no appetite for source control discipline. Validation workflows assume an authoring baseline to protect.

PBIP · TMDL · PBIR · Python

Performance tuning and BI quality hardening

Problem

Dashboards load too slowly for daily use. Users abandon them or fall back to static exports. The underlying cause is usually a combination of dataset design issues, unoptimized SQL, and DAX patterns that do not scale, but without profiling, the team is guessing at what to fix.
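
As a small example of profiling instead of guessing, the sketch below ranks visuals by total query time from an exported timings file. The CSV layout and column names are assumptions for illustration, not a fixed Power BI export format.

```python
"""Minimal sketch: rank report visuals by total query time from a timings export.

Assumptions: a CSV with 'visual' and 'duration_ms' columns, exported from
whatever profiling source is available; the layout is illustrative only.
"""
import csv
from collections import defaultdict
from pathlib import Path


def slowest_visuals(timings_csv: Path, top_n: int = 10) -> list[tuple[str, float]]:
    """Aggregate total query duration per visual and return the worst offenders."""
    totals: dict[str, float] = defaultdict(float)
    with timings_csv.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["visual"]] += float(row["duration_ms"])
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)[:top_n]


if __name__ == "__main__":
    for visual, total_ms in slowest_visuals(Path("query_timings.csv")):
        print(f"{total_ms:>10.0f} ms  {visual}")
```

Even a list this simple turns "the dashboard is slow" into "these three visuals account for most of the wait", which is where tuning starts.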

How I help

  • Dataset redesign and SQL optimization targeting measured bottlenecks
  • DAX and query-path review for dashboard responsiveness
  • Quality-focused delivery for business-critical reporting that cannot afford to be slow or unreliable

Outcome

Dashboards that load fast enough to be used in meetings and daily workflows. Performance improvements that are measurable and specific, not vague claims of "optimization." The team understands what was slow and why, so the fixes hold.

Good fit when

  • A business-critical dashboard is too slow and the team has already tried the obvious fixes
  • A dataset has grown large enough that refresh times or query times are becoming a blocker
  • Stakeholders have stopped trusting the reports because of performance or reliability issues

Engagement shape

Duration: 2 to 4 weeks
Team fit: Embedded with your BI lead or a small BI team
Investment: Scoped per engagement

Not a fit when

The underlying data model itself needs to be re-architected. Start with Semantic model engineering instead; performance tuning works best on a model that is worth tuning.

Power BI · SQL · DAX · Azure · Databricks

Data quality and governance visibility

Problem

Data quality issues are invisible until a stakeholder notices something wrong in a report. There is no consolidated view of where quality problems exist, how severe they are, or whether they are improving. Governance conversations happen without shared evidence, so priorities are hard to align.
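
As one way to picture that consolidated view, the sketch below runs rule-based checks that emit severity-tagged results a scorecard can report on. The rules, column names, and sample data are illustrative only.

```python
"""Minimal sketch: rule-based quality checks emitting severity-tagged results.

Assumptions: the rules, severities, column names, and sample data below are
illustrative; real checks run against whatever sources the scorecard covers.
"""
from dataclasses import dataclass

import pandas as pd


@dataclass
class CheckResult:
    table: str
    rule: str
    severity: str
    failed_rows: int
    checked_rows: int


def run_checks(table_name: str, df: pd.DataFrame) -> list[CheckResult]:
    """Two example rules: key completeness and row freshness."""
    results = []
    if "customer_id" in df.columns:
        failed = int(df["customer_id"].isna().sum())
        results.append(CheckResult(table_name, "customer_id not null", "high", failed, len(df)))
    if "updated_at" in df.columns:
        updated = pd.to_datetime(df["updated_at"], errors="coerce")
        threshold = pd.Timestamp.today().normalize() - pd.Timedelta(days=1)
        stale = int((updated < threshold).sum())  # missing dates are not counted here
        results.append(CheckResult(table_name, "updated within 1 day", "medium", stale, len(df)))
    return results


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"customer_id": [1, None, 3], "updated_at": ["2024-01-01", "2024-06-01", None]}
    )
    # One row per rule; a table like this is what the Power BI scorecard reads.
    print(pd.DataFrame(run_checks("transactions", sample)))
```

The point is not the checks themselves but the shared, trendable output: one row per rule, per table, per run.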

How I help

  • Data quality scorecards and governed reporting views in Power BI
  • Stakeholder-facing visibility into issue tracking, severity, and resolution trends
  • Reporting designed for governance decision-making, not only technical completeness

Outcome

A shared view of data quality that governance stakeholders can actually use to prioritize. Issues are tracked visibly, not buried in tickets. Teams can show whether quality is improving over time, which builds trust and supports investment decisions.

Good fit when

  • Governance reviews lack a shared, data-backed view of quality across domains
  • Data quality issues surface only through ad-hoc complaints rather than structured monitoring
  • The team needs to demonstrate data quality posture to leadership or compliance stakeholders

Engagement shape

Duration: 4 to 10 weeks for the initial scorecard; ongoing refinement scoped separately
Team fit: Governance stakeholders plus a BI team of 2 to 6
Investment: Scoped per engagement

Not a fit when

A formal data catalog or quality platform is already the chosen investment. This work is interim visibility, not a replacement for Purview, Collibra, or similar platforms.

Power BI · DAX · Azure · Databricks

Practical AI-assisted BI workflows

Problem

There is interest in using AI to improve BI workflows, but it is unclear where AI adds real value versus where it introduces risk. Teams worry about hallucinated outputs, ungoverned prompts, or AI-generated logic that no one reviews before it reaches production.
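
To make the review-gate idea concrete, here is a minimal sketch in which AI-drafted artifacts land in a pending area and are only promoted with a named reviewer and a timestamped approval record. The folder layout and file formats are assumptions for illustration.

```python
"""Minimal sketch: a human review gate for AI-drafted artifacts (e.g. drafted tests).

Assumptions: the folder layout and approval-record format are illustrative; the
only rule that matters is that nothing is promoted without a named approval.
"""
import json
from datetime import datetime, timezone
from pathlib import Path

PENDING = Path("ai_drafts/pending")
APPROVED = Path("ai_drafts/approved")


def submit_draft(name: str, content: str) -> Path:
    """Store an AI-drafted artifact for review; nothing downstream reads this folder."""
    PENDING.mkdir(parents=True, exist_ok=True)
    path = PENDING / f"{name}.txt"
    path.write_text(content, encoding="utf-8")
    return path


def approve(name: str, reviewer: str) -> Path:
    """Promote a draft only with a named reviewer and a timestamped approval record."""
    APPROVED.mkdir(parents=True, exist_ok=True)
    draft = PENDING / f"{name}.txt"
    target = APPROVED / f"{name}.txt"
    target.write_text(draft.read_text(encoding="utf-8"), encoding="utf-8")
    record = {
        "artifact": name,
        "reviewer": reviewer,
        "approved_at": datetime.now(timezone.utc).isoformat(),
    }
    (APPROVED / f"{name}.approval.json").write_text(json.dumps(record, indent=2))
    draft.unlink()
    return target


if __name__ == "__main__":
    submit_draft("total_sales_measure_test", "-- drafted test expression goes here")
    approve("total_sales_measure_test", reviewer="bi.lead@example.com")
```

The same gate works whatever the drafting model is; the value is the explicit, auditable approval step, not the tooling.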

How I help

  • Proof-of-concept work with Azure OpenAI in analytics-adjacent workflows
  • AI-assisted scenario drafting and engineering support with human review gates
  • Honest scoping: identifying where AI helps, where it does not, and what guardrails are needed

Outcome

A clearer picture of where AI fits into the team's BI workflows and where it does not. POCs that are scoped honestly, with review controls intact. The team can make an informed decision about what to invest in further, without pressure from overstated vendor claims.

Good fit when

  • The team is evaluating AI for BI use cases and wants a grounded, review-first perspective
  • An AI-assisted workflow has been proposed but no one has scoped the governance or review requirements
  • Leadership is asking about AI in BI and the team needs a realistic assessment rather than a demo

Engagement shape

Duration: 2 to 6 weeks for a POC
Team fit: Small team of 1 to 3 with a named review owner
Investment: Scoped per engagement

Not a fit when

The goal is a production AI feature with vendor-grade uptime and SLAs. This work is scoping and POC delivery, not production AI engineering.

Azure OpenAI · Python · Azure · Databricks

Engagement

How I typically engage

Three common engagement shapes depending on scope and timeline.

Advisory review

A short engagement to assess the current state of reporting, models, or governance and recommend concrete next steps.

  • Review a semantic model for structural risks and validation gaps
  • Assess a dashboard performance problem and identify the likely root causes
  • Evaluate a modernization backlog and recommend a sequencing approach

Focused modernization sprint

Time-boxed delivery against a defined scope, typically a migration, a performance fix, or a governance visibility build.

  • Migrate a set of legacy reports to Power BI with structure cleanup
  • Tune a slow dataset and deliver measurable performance improvements
  • Build a data quality scorecard for governance stakeholders

Embedded delivery support

Join the team for a sustained period to deliver BI work alongside existing engineers, analysts, and stakeholders.

  • Hands-on Power BI delivery embedded within a data or analytics team
  • Ongoing semantic model engineering and validation workflow support
  • Cross-functional delivery across reporting, data engineering, and governance workstreams

Related Work

Case studies behind the services.

A short set of examples that show how these service areas look in practice.


Semantic Model Engineering

Power BI Automated Measure Testing with PBIP

Maintaining 10+ production Power BI datasets and reports as a solo developer made measure-level regressions easy to miss. Changes were reviewed manually, if at all, and confidence in deployment dropped as the models grew. The underlying issue: a `.pbix` opened and saved is a review black box; nothing compares the model before and after.

What this proves

Shows a recent workflow built from real production-maintenance needs — PBIP + TMDL + PBIR as the foundation for treating Power BI like production code — with public proof through a GitHub repo and a Mar 2026 speaking session.

semantic-model · validation · pbip
Read case study

Governance Visibility

Power BI Data Quality Scorecard

Governance stakeholders needed an interim way to monitor data quality across critical digital-banking and card platforms before a permanent governance tool was ready.

What this proves

Shows BI delivery tied to governance trust, secure-environment constraints, and a maintainable handover path.

data-quality · governance · dashboard-design
Read case study

Reporting Modernization

Reporting Modernization: Cognos to Power BI Paginated Reports

A Canadian retail client relied on 10+ legacy Cognos reports for sales and inventory tracking; the reports were difficult to maintain and extend.

What this proves

Shows enterprise reporting modernization, migration experience, and comfort with operational reporting.

report-migration · paginated-reports · cognos-modernization
Read case study

Location

Bangkok-based, remote across Asia and globally.

Based in Bangkok, Thailand, and remote-friendly for most engagements. Onsite work in Bangkok is possible when a scope benefits from being in the room. Time zones across Asia and the UK are comfortable; US overlap is manageable for short calls.

FAQ

Common questions before a first conversation.

Short, honest answers to the questions that usually come up before an engagement.

Do you work remotely, or only in Bangkok?

Based in Bangkok, Thailand, and remote-friendly across Asia and globally for most engagements. Onsite work in Bangkok is possible when a specific scope benefits from being in the room.

How long is a typical engagement?

Advisory reviews run 2 to 4 weeks. Focused modernization or performance sprints usually sit at 2 to 8 weeks. Embedded delivery is longer, typically 4 to 12 weeks. Scope and duration are agreed before the engagement starts.

Do you work under NDA?

Yes. Most engagements involve sensitive client material and run under standard confidentiality terms. The case studies on this site are anonymized to industry patterns. No client names, production data, or credentials are disclosed publicly.

How quickly can you start?

It depends on what is already committed. Send a brief through the contact form and a reply comes within 1 to 2 business days with a realistic start window, or an honest not-a-fit-right-now where that is the case.

Do you only take Power BI work?

Most engagements center on Power BI because that is where the depth is. SQL, Python, and Databricks show up when the reporting problem extends upstream. The work is not positioned as full data platform architecture or data engineering lead.

Can recruiters and hiring managers use the same contact form?

Yes. The inquiry-type dropdown on the contact form routes hiring conversations, consulting work, advisory or embedded support, speaking, and peer outreach separately.

What does a first conversation usually look like?

A 30-minute call to hear the problem, the constraint, and what has already been tried. The goal of the call is a clear yes, no, or scoped proposal. If it is not a fit I will say so and suggest someone else when I can.

Contact

If there is a reporting, model, or delivery problem to untangle, start there.

Start with the current friction point: slow dashboards, fragile semantic models, modernization backlog, governance visibility gaps, or AI workflow guardrails.