A few years back, a client came to me with what sounded like a simple idea: build a used-car listing site for Canada. "Just like Kijiji but only for vehicles," he said. I've heard "just like [big site] but for [niche]" enough times to know it's never simple — and WheelsNearMe.ca was no exception. What started as a car listings board turned into a full multi-vendor marketplace with dealer onboarding, automated pricing tools, geolocation search, and eventually over 50,000 registered users.
I want to write honestly about how it went — the decisions that worked, the ones I'd make differently, and the technical details that actually mattered. If you're planning a similar project, I hope this saves you some of the pain I went through figuring it out.
The Initial Brief and Why It Grew
The original brief was straightforward: allow private sellers and small dealers to list vehicles, let buyers search by make, model, year, and location. A classic classifieds setup. But once we started mapping out the user journeys, it became clear that dealers had fundamentally different needs from private sellers.
Dealers wanted inventory management — the ability to upload 80 listings at once, update prices in bulk, and track which listings were getting views. Private sellers just wanted to post one car and be done with it. Treating both with the same interface would mean one group was always frustrated. So I proposed a proper vendor/seller distinction built into the data model from day one.
That single decision shaped everything that followed. It also added about six weeks to the initial timeline, which I now consider one of the best investments we made in that project.
Technology Decisions — Laravel on Google Cloud
I chose Laravel for the backend without much deliberation. I've built enough PHP applications to know what a well-structured Laravel codebase looks like at scale, and for a marketplace with complex business rules, Eloquent's relationship system and Laravel's policy/gate authorization make things significantly cleaner than rolling your own.
The hosting decision was more interesting. The client initially wanted shared hosting — they were comparing prices on cPanel plans. I had to explain, probably three separate times, why a marketplace shouldn't run on shared hosting. We eventually landed on Google Cloud Platform: Cloud Run for the application, Cloud SQL for MySQL, Cloud Storage for vehicle images, and Cloud CDN in front of it all.
This stack ended up handling 50K users without issue, but it was the architecture choices inside the Laravel app that made the difference. Let me walk through the ones that mattered most.
The Vendor Data Model
The database schema has a users table, a vendors table (one-to-one with users), and a listings table that belongs to a vendor. This seems obvious, but it has real implications. Every permission check in the app runs through vendor context, not just user context. A user who is also a vendor has a vendor_id on their session, and every admin action they take — publishing a listing, editing pricing, responding to enquiries — is scoped to that vendor.
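A minimal sketch of those relationships in Eloquent (model and column names here are illustrative, not lifted from the production codebase):

```php
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\{HasOne, HasMany, BelongsTo};

class User extends Model
{
    // One-to-one: a user may or may not be a vendor (plain buyers have no row).
    public function vendor(): HasOne
    {
        return $this->hasOne(Vendor::class);
    }
}

class Vendor extends Model
{
    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }

    public function listings(): HasMany
    {
        return $this->hasMany(Listing::class);
    }
}

class Listing extends Model
{
    public function vendor(): BelongsTo
    {
        return $this->belongsTo(Vendor::class);
    }
}
```

With listings hanging off the vendor rather than the user, scoping a query to the logged-in dealer is one relationship call: `$user->vendor->listings()`.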
This made the admin panel straightforward. When a dealer logs in, they only ever see their own inventory. When a platform admin logs in, they see everything across all vendors. The gate checks for this are defined once in VendorPolicy and used consistently across controllers.
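I won't reproduce the real policy here, but the scoping rule it encodes looks roughly like this (a sketch; the method and column names are my assumptions):

```php
use App\Models\{User, Vendor};

class VendorPolicy
{
    // Platform admins can act on any vendor; everyone else only on their own.
    public function manage(User $user, Vendor $vendor): bool
    {
        return $user->is_admin || $user->vendor_id === $vendor->id;
    }
}
```

Controllers then call `$this->authorize('manage', $listing->vendor);` before any mutation, so the scoping rule lives in exactly one place.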
One thing I'd do differently: I added the vendor approval workflow (new dealers need manual approval before their listings go live) in week three, retrofitting it into a system that hadn't accounted for it. If I'd thought through the full onboarding flow from the start, the state machine for vendor status would have been cleaner. Lesson learned.
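For what it's worth, the cleaner version I'd build today is a small explicit state machine — something like this (the states and transitions are my after-the-fact sketch, not the shipped code):

```php
enum VendorStatus: string
{
    case Pending   = 'pending';    // just signed up, listings hidden
    case Approved  = 'approved';   // listings may go live
    case Rejected  = 'rejected';
    case Suspended = 'suspended';

    /** The transitions an admin is allowed to make. */
    public function canTransitionTo(self $next): bool
    {
        return in_array($next, match ($this) {
            self::Pending   => [self::Approved, self::Rejected],
            self::Approved  => [self::Suspended],
            self::Rejected  => [self::Pending],
            self::Suspended => [self::Approved],
        }, true);
    }
}
```

Having the legal transitions in one enum means the approval workflow can't be bypassed by a stray update in some controller.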
Geolocation Search — Where It Got Interesting
The search feature required returning listings within a radius of the user's location. My first attempt used MySQL's ST_Distance_Sphere function on latitude/longitude columns. It worked, but it was slow — scanning the full listings table on every search, because a function call over plain latitude/longitude columns can't use an index, and MySQL's spatial indexes only work on a dedicated geometry column.
The fix was using a proper spatial index on a POINT column and switching to ST_Within with a bounding box pre-filter before the more expensive distance calculation. Query time dropped from an average of 800ms to under 80ms on a dataset of 300,000 listings. That's the kind of optimisation you can only find by profiling under realistic data volumes — which is why I always import or generate realistic test data before calling a feature "done."
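The shape of the fixed query is roughly this (a sketch, not the production code; it assumes MySQL 8, a `location` POINT column with SRID 4326 and a SPATIAL index, and note that MySQL 8 stores SRID 4326 coordinates in latitude-longitude order):

```php
// $lat, $lng, $radiusM come from the search request.
$latDelta = rad2deg($radiusM / 6371000);        // metres -> degrees of latitude
$lngDelta = $latDelta / cos(deg2rad($lat));     // longitude degrees shrink with latitude

// Rectangular bounding box around the search centre, as SRID 4326 WKT.
$bbox = sprintf(
    'POLYGON((%1$f %3$f, %1$f %4$f, %2$f %4$f, %2$f %3$f, %1$f %3$f))',
    $lat - $latDelta, $lat + $latDelta,
    $lng - $lngDelta, $lng + $lngDelta
);

$listings = Listing::query()
    // Cheap pre-filter that can use the spatial index...
    ->whereRaw('ST_Within(location, ST_GeomFromText(?, 4326))', [$bbox])
    // ...then the exact great-circle distance on the survivors.
    ->whereRaw(
        'ST_Distance_Sphere(location, ST_GeomFromText(?, 4326)) <= ?',
        [sprintf('POINT(%f %f)', $lat, $lng), $radiusM]
    )
    ->get();
```

The pre-filter throws away almost everything using the index; the expensive sphere distance only runs on the handful of rows inside the box.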
Bulk Image Uploads and Cloud Storage
Dealers uploading 40 photos for 20 different car listings at once is not a small I/O problem. The naive approach — process all uploads synchronously in the request — would time out on any shared hosting and would make the user interface feel horrible even on a good server.
I moved image processing to a queue worker. The dealer uploads files, they go directly to a temporary bucket on Cloud Storage, and a ProcessVehicleImages job queues up for each file. The job handles resizing (we generate three sizes: thumbnail, standard, and full), adds a watermark via Intervention Image, and moves the final versions to the production bucket. The user sees a "processing" state on their listing and it flips to "ready" once the queue worker finishes.
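The job itself is short once the buckets are in place. A trimmed-down sketch — the disk names, sizes, and watermark path are illustrative, and it assumes Intervention Image v3:

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Storage;
use Intervention\Image\ImageManager;
use Intervention\Image\Drivers\Gd\Driver;

class ProcessVehicleImages implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(
        private readonly int $listingId,
        private readonly string $tempPath,   // path in the temporary bucket
    ) {}

    public function handle(): void
    {
        $original = Storage::disk('gcs_temp')->get($this->tempPath);
        $manager  = new ImageManager(new Driver());

        foreach (['thumb' => 320, 'standard' => 1024, 'full' => 2048] as $name => $width) {
            // Re-read the original each pass so one resize doesn't shrink the next.
            $image = $manager->read($original)
                ->scaleDown(width: $width)
                ->place(storage_path('app/watermark.png'), 'bottom-right', 10, 10, 60);

            Storage::disk('gcs')->put(
                "listings/{$this->listingId}/{$name}_" . basename($this->tempPath),
                (string) $image->toJpeg(quality: 82)
            );
        }

        Storage::disk('gcs_temp')->delete($this->tempPath);
    }
}
```

Dispatching is one line per file — `ProcessVehicleImages::dispatch($listing->id, $path);` — and the "processing" → "ready" flip happens once the last job for a listing completes.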
Total processing time for 40 images averages about 90 seconds on a single worker. Once the volume grew, I scaled the worker count horizontally on Cloud Run — that's one of the practical benefits of a containerised setup.
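Scaling the workers is a one-liner against the worker service (the service name here is hypothetical):

```shell
# Keep at least 2 queue workers warm, burst to 10 under load.
gcloud run services update queue-worker \
  --min-instances=2 \
  --max-instances=10
```

No SSH, no provisioning scripts — the same container image just runs in more places.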
What the Client Didn't Expect to Care About
Performance. The client had never thought about page speed before they saw Google's Core Web Vitals report. Suddenly they cared very much. We spent about two weeks tightening the frontend: lazy loading images, reducing DOM size on the listing pages, fixing cumulative layout shift from the vehicle photo carousel, and implementing fragment caching for the search results page.
The biggest single win was eager-loading the main vehicle listing image with a proper fetchpriority="high" attribute. LCP went from 4.2 seconds to 1.8 seconds on mobile. No code rewrite needed — just understanding what the browser prioritises.
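In Blade terms the change amounts to a couple of attributes on the hero image (the helper method here is made up for illustration):

```html
<!-- Hero image: fetched eagerly and at high priority; explicit dimensions
     also stop the photo carousel from shifting layout. -->
<img src="{{ $listing->heroImageUrl('standard') }}"
     width="1024" height="768"
     fetchpriority="high" alt="{{ $listing->title }}">

<!-- Everything below the fold stays lazy. -->
<img src="{{ $photo->url('thumb') }}" loading="lazy" alt="">
</img>
```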
Lessons I'm Taking Forward
First: define your user types before writing a single line of code. The vendor/buyer distinction in WheelsNearMe shaped everything — data model, routing, admin tooling, email notifications. Getting it right early saved weeks of painful refactoring later.
Second: don't process images synchronously. Queue them. Every time. It's not over-engineering; it's basic respect for the user's time and your server's resources.
Third: spatial queries need spatial indexes. Test with realistic data volumes before you claim a feature is ready.
Fourth: clients don't know what they want to care about until they see it. Build room into your timeline for things they'll discover after launch — because they will discover them, and they will want them fixed.
If you're working on a marketplace or multi-vendor platform and want to talk through the architecture, I'm happy to have that conversation. Get in touch here.