My Experience Building AI Agents With Manus and Genspark
People keep saying 2025 is the "year of AI agents." Connect an LLM to some tools (like your mail or calendar) and you get an AI agent that can, for example, schedule a meeting all by itself, without needing constant re-prompting.
They promise to automate work and make our lives easier.
At Labellerr, where we focus on making data annotation easier, I saw a perfect job for one of these agents.
I spend a lot of time manually checking different websites to find people or companies asking for help with data labeling or annotation.
It's important work, but it takes hours! I thought, "This is exactly what an AI agent should do!"
So, I decided to build one. My goal was simple: create an agent that could automatically check social media for relevant posts and notify me.
My journey trying to build this seemingly straightforward agent showed me exactly why these powerful AI tools aren't quite ready to take over the world just yet.
The Problem I Needed to Solve
Posts Example
Manually checking LinkedIn, Twitter, and Reddit every day is time-consuming. People post requests for data annotation help, ask about tools, or look for services, but finding these posts is like searching for needles in a haystack.
I wanted an AI agent to automate this. Ideally, the agent would:
- Check LinkedIn, Twitter, and Reddit every hour.
- Look for new posts containing keywords like "need data labeling help," "data annotation services," "looking for annotation tools," etc.
- If it found a relevant post, it would immediately send me an alert, maybe through Slack or email.
This seemed like a perfect task for an AI agent – repetitive searching and simple notification.
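To make the plan concrete, the core of the job is just keyword matching plus a notification. Here's a minimal sketch of that piece, assuming a Slack incoming webhook for alerts; the keyword list, environment variable name, and function names are my own illustrative choices, not a finished implementation:

```python
import json
import os
import urllib.request

# Illustrative keyword list; in practice this would be tuned over time.
KEYWORDS = [
    "need data labeling help",
    "data annotation services",
    "looking for annotation tools",
]

def is_relevant(post_text: str) -> bool:
    """Return True if the post mentions any of the target phrases."""
    text = post_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

def send_slack_alert(message: str) -> None:
    """Send a plain-text alert to a Slack incoming webhook (URL read from an env var)."""
    payload = json.dumps({"text": message}).encode()
    request = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

# Example: alert on a post that matches.
if is_relevant("We need data labeling help for a computer vision dataset"):
    send_slack_alert("Possible lead: someone is asking for data labeling help")
```

The hard part, as I was about to find out, is everything around this loop: actually getting the posts out of each platform.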
Attempt 1: Building It Myself From Scratch
Local Agent Development
My first thought was to build it locally.
So I planned a setup: a simple React frontend to control it and a Python backend to do the searching.
For searching, I'd use libraries like BeautifulSoup (for general web scraping, though tricky on dynamic sites), PRAW (for Reddit), and Tweepy (for Twitter).
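For the Reddit piece, my plan looked roughly like the PRAW sketch below. The credentials, subreddit names, and keyword list are placeholders; this shows the intended shape of the search, not a production scraper:

```python
import praw

# Credentials come from a Reddit "script" app; placeholders shown here.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="labellerr-lead-monitor/0.1 by YOUR_USERNAME",
)

KEYWORDS = ["data labeling", "data annotation", "annotation tools"]

def find_reddit_leads(subreddits: str = "MachineLearning+datasets", limit: int = 50):
    """Scan the newest posts in the given subreddits for keyword matches."""
    leads = []
    for submission in reddit.subreddit(subreddits).new(limit=limit):
        text = f"{submission.title} {submission.selftext}".lower()
        if any(keyword in text for keyword in KEYWORDS):
            leads.append(f"https://www.reddit.com{submission.permalink}")
    return leads

print(find_reddit_leads())
```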
I hit roadblocks almost immediately:
- LinkedIn Login Hell: Getting the agent to log into LinkedIn automatically was a nightmare. Their security (using OAuth2) kept blocking my attempts, throwing errors like "403 Forbidden." I couldn't get reliable access.
- Reddit Said 'Stop!': My Reddit searching script worked for about five minutes, then got blocked. Reddit enforces rate limits – rules about how often an automated tool can hit its API – and my agent was checking far too frequently (see the backoff sketch after this list).
- Twitter's Price Tag: Twitter's API (the official way for programs to interact with the platform) had changed. The access tier I needed just to read posts reliably would have cost a shocking $5,000 per month – far too expensive for this project.
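In hindsight, the Reddit blocking was partly self-inflicted: the agent needs to poll slowly and back off whenever the API pushes back. A rough sketch of the politer loop I should have written, with arbitrary interval values and `find_reddit_leads` standing in for the search helper from the earlier sketch:

```python
import time
from prawcore.exceptions import TooManyRequests  # raised on HTTP 429 (rate limited)

BASE_INTERVAL = 15 * 60      # poll every 15 minutes instead of hammering the API
MAX_BACKOFF = 60 * 60        # never wait more than an hour between retries

def polite_poll(fetch_leads):
    """Call fetch_leads() forever, slowing down whenever Reddit rate-limits us."""
    delay = BASE_INTERVAL
    while True:
        try:
            for lead in fetch_leads():
                print("Possible lead:", lead)
            delay = BASE_INTERVAL                # success: reset the backoff
        except TooManyRequests:
            delay = min(delay * 2, MAX_BACKOFF)  # rate limited: double the wait
            print(f"Rate limited, backing off for {delay // 60} minutes")
        time.sleep(delay)

# Usage: polite_poll(find_reddit_leads), using the search function sketched earlier.
```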
Building from scratch was proving difficult, mainly because the platforms themselves make it hard for automated tools to get the data.
Attempt 2: Trying the Manus AI Agent Platform
Manus Agent
Okay, building from scratch was tough. So, I looked at existing "general purpose" AI agent platforms. Manus AI caught my eye.
It promised a "no-code social media scraper" and claimed it could "auto-deploy" the agent to run online easily (using AWS Lambda). Sounds perfect, right?
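For context on what "auto-deploy to AWS Lambda" would actually mean here: the deployed artifact is essentially a handler function invoked on a schedule (for example, an hourly EventBridge rule). The sketch below is my own illustration of that shape, not anything Manus generated; `fetch_new_posts` and `send_slack_alert` are stand-ins for the pieces sketched earlier:

```python
import json

KEYWORDS = ["data labeling", "data annotation", "annotation tools"]

def lambda_handler(event, context):
    """Entry point AWS Lambda calls on each scheduled (e.g. hourly) invocation."""
    posts = fetch_new_posts()   # stand-in for the Reddit/Twitter/LinkedIn fetching code
    matches = [p for p in posts if any(k in p["text"].lower() for k in KEYWORDS)]
    for post in matches:
        send_slack_alert(f"Possible lead: {post['url']}")
    return {"statusCode": 200, "body": json.dumps({"matches": len(matches)})}

def fetch_new_posts():
    # Placeholder: this is where all the API-access and authentication pain lives.
    return []

def send_slack_alert(message):
    # Placeholder: e.g. post to a Slack incoming webhook, as in the earlier sketch.
    print(message)
```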
The reality was quite different:
- Confusing 'No-Code': The user interface wasn't exactly drag-and-drop. Setting up the agent involved writing instructions that felt a lot like complicated configuration files (pseudo-YAML). It wasn't intuitive.
- Broken Links and Hallucinations: When I tried to deploy the agent, the process seemed to work, but the web links Manus AI generated for the running agent were broken. They didn't lead anywhere. It felt like the AI had just hallucinated the deployment details.
- Unexpected Costs: Even if it had worked, Manus AI estimated the cost for this basic monitoring would be around $300 per month. That still felt high for such a simple task.
Manus AI promised simplicity but delivered complexity, broken results, and significant cost.
You can view the replay of this.
Attempt 3: Giving GenSpark AI a Shot
Genspark Agent
I wasn't ready to give up yet. I had heard about GenSpark AI, which positions itself as a more advanced "autonomous multi-agent system" with lots of integrations supposedly built in. Maybe this newer generation of agent could handle it?
Here’s what happened with GenSpark:
GenSpark built a very sleek, professional-looking user interface for my monitoring agent. It looked great!
But when I ran it, the examples it showed me – posts supposedly asking for data annotation help on Reddit and LinkedIn – were completely fake.
It was using placeholder dummy data, not real-time information.
GenSpark looked promising on the surface but failed on the most critical parts: getting real data and handling real logins. It hallucinated key functionalities.
You can watch the chat here.
Lessons Learned from My Agent-Building Journey
Trying these different approaches taught me some valuable lessons about the state of AI agents in 2025:
APIs Are Still the Biggest Hurdle
The main bottleneck isn't always the AI's intelligence; it's getting reliable access to the data sources.
Social media platforms, in particular, actively limit automated access through complex logins (OAuth), strict rate limits, and sometimes very high API costs.
Even advanced agents like GenSpark struggled to overcome these real-world barriers.
Agents Overpromise on Integration (Hallucination vs. Execution)
Many agent platforms claim they can easily connect to various services. However, they often fail when it comes to the tricky details of actual authentication, error handling, and adapting to platform-specific rules.
They frequently hallucinate that they have capabilities they don't possess.
Cost vs. Real Value
Right now, building and running an AI agent that is truly reliable for tasks involving restricted external platforms can often cost more (in development time, debugging, and potential service fees) than simply doing the task manually or with less "autonomous" tools.
Conclusion
We are definitely making exciting progress with AI agents. The idea of AI handling complex, multi-step tasks autonomously is closer than ever.
However, my attempt to build a relatively simple social media monitor showed that we're not quite there yet.
Agents still struggle significantly with real-world integration, especially navigating the complex and often restrictive rules of external platforms and APIs.
They hallucinate functionality and often fail to handle errors robustly.
Until AI agents can reliably manage authentication, respect rate limits, avoid making things up, and navigate the specific quirks of each platform they interact with, human oversight and intervention (human-in-the-loop systems) remain absolutely essential.
The "year of the agent" is exciting, but the fully autonomous future still requires more development.
PS: If you’ve actually built an agent that reliably solves this social media monitoring problem for data annotation leads, please message me – I would genuinely love to beta test it! 🔥
Meme: "Will you take over the world, AI? Yes, I will." (u/kevivm on r/memes)
FAQs
Q1: What is GenSpark?
GenSpark is an AI-powered platform that utilizes multiple specialized agents to provide comprehensive search results and assist with various tasks, aiming to revolutionize information retrieval.
Q2: How does GenSpark differ from traditional search engines?
Unlike traditional search engines that list links, GenSpark generates custom pages called Sparkpages in real-time, offering synthesized and personalized information.
Q3: What are the limitations of GenSpark?
While GenSpark offers advanced features, it can experience longer response times for deep research tasks and may have limited customization options for users.