Thank you for sharing such an insightful point.
This really resonates. Speaking from my experience as an annotator on crowdsourcing platforms, I also found that a genuine commitment to quality among fellow annotators can be quite rare.
This makes me curious about a few things:
1. What are some concrete examples of the "unintended consequences" you ran into?
2. When you initially considered outsourcing, what was the main benefit you were hoping for (e.g., speed, cost)?
3. On the flip side, what have been the biggest frustrations or challenges with the in-house approach?
Would love to hear your thoughts on any of these. Thanks!
1) RE: Unintended consequences -
It was usually some mix of willful or accidental misinterpretation of what we wanted. I can't go into details, but in many cases the annotators were really aiming to maximize billable activities. Where there was ambiguity, they would pick one interpretation and just run with it without making the effort to verify. In some ways I understand their perspective: they know their work is a commodity, so they do the minimum viable job to get paid.
2) RE: Benefits of outsourcing -
The primary benefit was usually speed in getting to a certain dataset scale. These vendors had existing pools of workers that we could access immediately. There were potential cost savings, but they were never as good as we had projected: labeling quality would be less than ideal, which would trigger interventions to verify or improve annotations, which in turn added cost and complexity.
3) RE: In-house ops -
Essentially, moving things in-house doesn't magically solve the issues we had. It's a lot of work to recruit and organize data-labeling teams, and they are still subject to the same incentive-misalignment problems as outsourced workers, but we obviously have a closer relationship with them, and that seems to help. We try to communicate the importance of their work, especially early on, when their feedback and "feel" for the data is very valuable. It's also much, much more expensive, but all things considered it's still the "right" approach in many cases. In some scenarios we can amplify their work with synthetic data generators and the like (rough sketch below).
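To make that last point concrete, here is a minimal sketch of what "amplifying" a small set of human labels with synthetic variants can look like. The `paraphrase` function is a hypothetical stand-in for whatever generator you would actually use (an LLM call, templated rewrites, etc.); it is not any specific tool we used, and the labels and examples are made up for illustration.

```python
import random

# A handful of carefully human-labeled seed examples.
seed_examples = [
    {"text": "The checkout button is misaligned on mobile", "label": "ui_bug"},
    {"text": "Please add dark mode support", "label": "feature_request"},
]

def paraphrase(text: str) -> str:
    """Hypothetical stand-in for a real generator (LLM call, templated rewrite, etc.)."""
    prefixes = ["FYI: ", "Issue: ", "Heads up: "]
    return random.choice(prefixes) + text.lower()

def amplify(examples: list[dict], variants_per_example: int = 3) -> list[dict]:
    """Return the human-labeled examples plus synthetic variants that inherit each label."""
    synthetic = [
        {"text": paraphrase(ex["text"]), "label": ex["label"]}
        for ex in examples
        for _ in range(variants_per_example)
    ]
    return examples + synthetic

dataset = amplify(seed_examples)
print(f"{len(seed_examples)} human-labeled examples -> {len(dataset)} total")
```

The point is that the human-assigned label is treated as ground truth and only the surface form is varied, so the expensive human judgment gets spread across more training examples.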
That's a great insight, Paul. As someone who has been researching the data annotation space, I find your perspective really resonates.
I completely agree that the first-hand, contextual information you get from actual users is something an external firm can never replicate. It seems like the most effective and efficient way to spin the data flywheel at high velocity.
This leads me to a question I've been struggling to understand: If this approach is so powerful, why do you think even companies with the vast resources of Big Tech still rely on what seems to be a riskier path—using external human evaluators—instead of fully building this feedback loop in-house?
I feel like I'm missing a key piece of the puzzle. I would be very interested to hear if you have any thoughts on this.
That's the million-dollar question, and you've hit on the key puzzle piece.
I believe the answer lies in distinguishing between two different stages of AI development: "Foundational Model Training" vs. "Product-Specific Fine-Tuning."
1. Foundational Model Training (The Big Tech Approach):
To build a base model like GPT-4 or Gemini, you need an unimaginable amount of general, brute-force data. You need millions of images labeled "cat" or "dog," and billions of text examples. For this massive scale of generic data, using large, external teams of human evaluators is often the only feasible way. It's about quantity and breadth.
2. Product-Specific Fine-Tuning (The Markhub Approach):
However, once you have that foundational model, the goal changes. To make an AI truly useful for a specific product, you no longer need a million generic data points. You need a thousand high-context, high-quality data points that are specific to your workflow.
For example, an external evaluator can label a button as "a button." But only a real designer using Markhub can provide the critical feedback, "This button's corner radius (8px) is inconsistent with our design system (6px)." This is the kind of nuanced, proprietary data that creates real product value, and it can only be generated "in workflow."
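Purely as a sketch of what that looks like in practice (the schema and field names below are hypothetical, not Markhub's actual data model), the value is in capturing that kind of correction as a structured record you can later fine-tune on:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class InWorkflowFeedback:
    """One high-context annotation captured while a designer works (illustrative schema)."""
    element_id: str       # which UI element the comment is about
    property_name: str    # e.g. "corner_radius"
    observed_value: str   # what the current design shows
    expected_value: str   # what the team's design system specifies
    source: str           # the rule or doc that justifies the correction
    comment: str          # the designer's own words

# The corner-radius example above, as a training-ready record.
record = InWorkflowFeedback(
    element_id="btn_primary_signup",
    property_name="corner_radius",
    observed_value="8px",
    expected_value="6px",
    source="design-system/tokens/radius",
    comment="This button's corner radius is inconsistent with our design system.",
)

print(json.dumps(asdict(record), indent=2))
```

Fields like expected_value and source can only be filled in by someone inside the workflow who knows the team's design system; an external labeler has no way to supply them, which is exactly the point.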
So, I think Big Tech isn't wrong; they're just solving a different problem (building the foundational engine). We, as application-layer startups, have the unique opportunity to build on top of that engine and solve the "last mile" problem by capturing the high-context data that truly makes a product smart.
You're not missing a puzzle piece at all; you've just identified the difference between building the engine and building the race car.
Thanks so much for that clear explanation. It really made me realize that while companies like Scale AI can thrive during the hype of the foundational-model race, it will likely get tougher for them down the road.
If you don’t mind me asking, as someone on the front lines of AI product development, what challenges have you found to be even more difficult than data annotation?