This is the tale of how we went from operating with an incomplete picture across our own customer journey – losing users the moment they hit our login gateway, burning engineering cycles on data we should have had out of the box, and guessing at what was actually driving conversion – to having the full picture, reliably, across every touchpoint.
As a business that runs multiple collector brands under one roof, we needed a solution that would close our data gaps and reduce our engineering dependencies, and our search led us to Quantum Metric.
Chapter 1: Where the Map Ran Out
“The hardest part of an adventure is getting to the start point.” – Mark Beaumont
Why We Needed More Than Basic Web Analytics
Before this story began, our crew was already relying on a host of analytics products – Google Analytics, Heap, Firebase and Lightdash – tools that continue to serve us well. However, as Collectors scaled its digital offerings from PSA and PCGS into new products like Powerpacks and mobile apps, we kept running into the same ceiling: limited self-serve analysis, high engineering overhead for custom clickstream data, messy export structures, and – most critically – no reliable way to stitch a user’s journey across our domain transitions.
After evaluating several paths, we decided Quantum Metric (QM) best met our critical needs. Beyond core analytics capabilities – automatic capture of clicks, page loads, API calls and frustration signals – it offered accurate tracking of behaviour across our multi-domain setup, genuine self-serve analysis, and clean connections to our existing architecture. Two things tipped the scales: a BigQuery connector for raw data export into our warehouse, and an Optimizely integration to extend QM metrics into our A/B testing workflows.
Another deciding factor was security. We needed to know that our customers’ data was locked away even from the vendor. QM uses client-side RSA 2048-bit encryption, where sensitive data is encrypted in the user’s browser and the private decryption key stays exclusively with us; this means QM physically cannot decrypt or view our sensitive user data. On top of that we use their AutoPII feature, an automated Data Loss Prevention (DLP) tool that proactively scans ingested session data to detect and flag potential PII exposure before it becomes a compliance issue. For a platform handling customer data at our scale, that architecture mattered.
Chapter 2: Exploring the Ocean Floors
“It is little keys that open up big doors.” – Lamine Pearlheart
The Onboarding
Before taking to the oceans, we tested our boats on rivers – we started with PSA as a proof of concept, setting up the QM pixel and SDK alongside engineers from PSA and Growth Marketing, then doing rigorous data validation with the QM team before committing to a wider rollout. Once we were confident in the data quality, we mapped our most-used dashboards from our previous platform, decided what to migrate and what to deprecate, and divided the migration work between us. QM’s accuracy and the variety of metrics tracked were a step up, giving us additional scope for insight generation.
The interesting decisions came in configuration and privacy. On the configuration side, we worked with QM’s solutions team to define the custom events, funnels, user attributes, and error captures that matched our actual KPIs – not just the platform defaults. On the privacy side, we walked through the live site to identify which customer PII needed encryption in session replays, masking sensitive data by default. At the same time, we retained limited, role-restricted access to filter sessions by email, so that we could debug errors or issues faced by specific users when necessary. It’s a balance between privacy and practicality that took deliberate thought to get right.
The QM team helped run targeted enablement sessions for each audience – Data, Growth & Marketing, Product, Engineering – along with dedicated ones for specific features like Event creation, Advanced funnels, Session Replays and User Roles. That made adoption noticeably smoother.
Chapter 3: Here Be Dragons
“A smooth sea never made a skillful mariner.” – Anonymous
The Hard Problems We Had to Solve from Scratch
Session Stitching
This was the big one. QM, like most analytics platforms, treats a new domain as a new session by default. But our registration and login flow routes users through a shared app.collectors.com gateway before landing them back on our branded properties. That intermediate hop was breaking session continuity for every single logged-in user – not an edge case, but the core of our authenticated traffic.
The solution required custom engineering: we worked with the QM team to append QM’s anonymous user ID and session ID to the URL as it passes through the sign-in flow. When the user lands on the other side, QM picks up those parameters and stitches the pre- and post-login activity into a single continuous session. It sounds simple in retrospect, but identifying that this was the right approach, and implementing it cleanly across our auth flow, took real collaboration on both sides.
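Conceptually, the hand-off looks something like the sketch below. The parameter names and helper functions are illustrative placeholders – the real implementation uses QM’s own APIs to read and restore the IDs – but the mechanic is the same: carry the identifiers across the domain boundary in the URL.

```javascript
// On app.collectors.com, just before redirecting the signed-in user
// back to the branded property, append QM's identifiers to the URL.
// (Parameter names "qm_uid" / "qm_sid" are hypothetical.)
function buildRedirectUrl(destination, qmUserId, qmSessionId) {
  const url = new URL(destination);
  url.searchParams.set("qm_uid", qmUserId);    // anonymous user ID
  url.searchParams.set("qm_sid", qmSessionId); // current session ID
  return url.toString();
}

// On the destination domain, the capture script reads the parameters
// back out and continues the same session rather than starting a new one.
function readStitchingParams(href) {
  const params = new URL(href).searchParams;
  return {
    userId: params.get("qm_uid"),
    sessionId: params.get("qm_sid"),
  };
}
```

For example, `buildRedirectUrl("https://www.psacard.com/account", "u-123", "s-456")` produces a URL from which `readStitchingParams` recovers both IDs on the other side of the hop.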
The Optimizely Mismatch
QM’s native Optimizely connector is built for Optimizely’s “Optimize Web” plan (client-side experimentation), whereas we use the “Optimize Feature Experimentation” plan, which is server-side. The two plans expose different feature variables, leaving a gap between the keys QM was looking for and what our data payload actually carried – and there was no readily available mapping between the two sets of variables.
Solving this required pulling in engineers from both QM and Optimizely to design a custom solution: identifying which of our variables could be mapped to the keys QM expects, then exposing them through an API call that QM could intercept and capture – a process that took several weeks of back-and-forth. The end result works well, and our websites’ A/B test data is now reliably ingested by QM. But the takeaway is that if you’re running server-side experimentation and evaluating QM, go in knowing this integration needs custom work. It’s solvable; it’s just not plug-and-play as of this writing.
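The shape of the mapping layer can be sketched roughly as follows. All key names on both sides are hypothetical – the actual contract was worked out between the QM and Optimizely engineers – but it shows the kind of translation involved: server-side decisions speak in flag/rule/variation keys, while a client-side-style connector expects experiment/variation identifiers.

```javascript
// Example of what a server-side Feature Experimentation decision
// exposes per user (field names illustrative):
const serverDecision = {
  flagKey: "checkout_redesign",
  ruleKey: "checkout_redesign_ab",
  variationKey: "variant_b",
};

// Translate it into the shape a client-side-oriented connector
// expects to find, so QM can intercept and capture it.
function toQmExperimentPayload(decision) {
  return {
    experimentId: decision.ruleKey,       // rule -> "experiment"
    variationId: decision.variationKey,   // variation key -> variation
    experimentName: decision.flagKey,     // flag -> human-readable name
  };
}
```

A payload like this would then be surfaced on the page (or via an intercepted API response) where QM’s capture script can pick it up.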
Chapter 4: Setting Up Camp
“Every great endeavour is fueled by great enthusiasm.” – Lailah Gifty Akita
How We Actually Use It Day-to-Day
With the dragons behind us, we finally had reliable end-to-end funnel tracking and were able to begin the real work – monitoring our key customer journeys. Here’s how different teams are putting it to work:
Autocapture handles the heavy lifting. The majority of our tracking runs on QM’s Autocapture – no manual tagging required for user engagements (clicks, page loads, API calls), audience dimensions (device types, URL parameters), and even frustration signals (like rage clicks). This covers most of our “happy path” monitoring instantly, which means we’re not burning engineering cycles on instrumentation for every new flow.
Custom Events fill the gaps. For anything Autocapture or UI configuration can’t cover, we write custom JavaScript to create Custom Events. QM’s solutions engineers often collaborate with us on this, helping keep the complexity of writing and maintaining these scripts off our internal engineering backlog without sacrificing flexibility.
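A typical custom event is only a few lines. The sketch below assumes QM exposes a `sendEvent` hook on its page-level API object; the event ID, value semantics, and exact signature here are assumptions for illustration, not QM’s documented contract.

```javascript
// Event IDs are assigned when the event is defined in the QM UI;
// the ID below is a placeholder.
const SUBMISSION_STARTED_EVENT_ID = 101;

// Fire a QM custom event, guarding against the capture script being
// blocked or not yet loaded. Returns true if the event was sent.
function trackSubmissionStarted(qmApi, declaredValue) {
  if (!qmApi || typeof qmApi.sendEvent !== "function") return false;
  // Assumed signature: sendEvent(eventId, conversionFlag, value)
  qmApi.sendEvent(SUBMISSION_STARTED_EVENT_ID, 1, declaredValue);
  return true;
}

// In the browser this would be called with the page-level API object,
// e.g. trackSubmissionStarted(globalThis.QuantumMetricAPI, 49.99);
```

Wrapping the call in a small guarded helper like this keeps event code safe to ship even when the analytics script is absent.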

The Opportunities tab does triage for us. One of the most practically useful features is the Opportunities tab within our funnel dashboards. It doesn’t just show where users drop off, it highlights specific friction points – such as technical errors or API failures – and estimates the conversion impact of each. It’s become a standard first stop when we’re trying to prioritize which bugs or issues to investigate.

Session Replay gives us the “why.” The funnels and conversion metrics tell us something dropped – Session Replay tells us what actually happened. The Customer Care team uses it to understand the exact sequence of a user’s reported issue. The Product team uses it to investigate anomalies in the funnel numbers.

Defining Events with our own Session Replays. One genuinely useful trick we’ve landed on: if we want to track a specific user interaction but aren’t sure of the right CSS selectors or element IDs, we recreate that interaction ourselves, then pull up our own session using QM’s Chrome extension (Quantum Metric Visible) and create the event directly from the replay timeline. This allows us to “reverse engineer” our tracking – creating precise segments and events based on actual user interactions rather than guessing which elements or URL patterns to use. It’s a small thing, but it cuts the back-and-forth between analytics and engineering significantly.
Epilogue: The Journey Continues
“Man cannot discover new oceans unless he has the courage to lose sight of the shore.” – Andre Gide
The platform is live, the team is trained, and the data is flowing. By working to keep our customers’ PII private and safe, and by leveraging QM’s unique offerings – like Session Replay and UX frustration signals – to reduce friction in their digital journey, we strive to make collecting safe, fun and easy for our customers.
The next chapter is making that data work harder across the organization to further improve upon Collectors’ mission. Here is where we’re most excited to go deeper:
Connecting the Front and Back End with BigQuery – Right now, we use the BigQuery integration to export raw QM engagement data into our warehouse. The real opportunity is joining that with our back-end transactional data to build a complete picture of the customer journey. This means better attribution, more precise engagement signals, and less guesswork about what front-end behavior actually drives conversion and lifetime value.
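As a rough illustration of what that join might look like, here is a query sketch we might run from a Node script or scheduled job. The dataset and column names are hypothetical – QM’s export schema and our transactional tables differ in practice – but the idea is joining stitched front-end sessions to back-end orders on the user ID.

```javascript
// Hypothetical table and column names for illustration only.
const journeyQuery = `
  SELECT
    qm.session_id,
    qm.user_id,
    qm.frustration_events,
    orders.order_id,
    orders.order_total
  FROM \`analytics.qm_sessions\` AS qm
  JOIN \`warehouse.orders\` AS orders
    ON qm.user_id = orders.user_id
   AND DATE(qm.session_start) = DATE(orders.created_at)
`;

// In practice this string would be passed to the BigQuery client or a
// scheduled query; the join key is the stitched user ID from the QM export.
```

Once sessions and orders share a key like this, attribution and lifetime-value questions become straightforward SQL rather than guesswork.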
Scaling for New Services and Markets – While our core funnels are instrumented, Collectors is not standing still. We are constantly launching new submission services for our existing customer base and expanding our footprint internationally. Our analytics framework has to scale alongside that product growth to ensure we don’t lose visibility as our offerings become more complex.
Deepening Mobile App and Experimentation Analytics – Our native mobile app user base is growing rapidly, and we need to match our web analytics rigor on mobile. We are focusing on creating highly customized user attributes and native events within QM to give our product teams the granular, mobile-specific visibility they need to optimize the app experience. On the experimentation side, now that the Optimizely integration is stable, we’re looking at how QM metrics can feed more directly into our A/B test analysis – moving beyond standard conversion metrics and into behavioral signals that tell us not just whether a variant won, but why.
Security and Fraud Detection – This is perhaps the most interesting cross-functional shift. We are exploring with our internal security team how QM’s behavioral data could serve as an early warning system. Moving from using analytics purely for conversion optimization to using it for active fraud detection is a massive step.
With a myriad of features and ideas still to explore, we look forward to seeing what more we can learn about our data – and therefore our customers.
Chandni M Narayanan, Kevin Wang
Chandni brings close to 9 years of experience in the analytics domain. She is a strong proponent of shaping product and business strategy through data-driven decision making. At Collectors Universe, she has been a key contributor for PCGS, building foundational reporting, sharing actionable analyses and providing subject matter expertise on all things data. Outside of work, she enjoys reading fiction, writing poetry, painting, dancing, and exploring new experiences.
Kevin is an Analytics professional at Collectors with around 8 years of experience in the field. He works closely with the marketing and product teams as their primary data point of contact and subject matter expert, helping them uncover insights through data analysis and experimentation to make data-driven decisions. Outside of work, Kevin enjoys reading and traveling.


