author: Marcus A. Lee
published on: Dec. 8, 2024, 10:59 a.m.
tags: #AI
In this blog post, I'll share tips on setting up CrewAI Flows effectively to get the best long-form content creation results. I'll also walk through a Crew Flow project that converts notes into well-written blog content using multiple crews, flow control, structured state management, and a self-evaluation loop.
It's common to see poor results when long-form content is passed to LLMs. Many believe a more advanced model is needed for better results, but often the problem comes down to how the crew is set up and how specifically each crew's focus is defined to optimize its output.
Joao Moura mentioned that OpenAI's `o1` model has been noted to hallucinate more often than older models when it comes to agentic behavior. The reason is that older models let agents use tools directly, whereas `o1` runs its chain-of-thought process to a conclusion before allowing agents to use any tools. This distinction in LLM choice, combined with crew flow techniques and setup, is what differentiates good results from poor ones in long-form content creation. For this reason, choosing `gpt-4o-mini` over the `o1` model can yield better results with a structured crew setup.
Note: I haven't come across data showing that `o1` models hallucinate or get derailed more often than `gpt-4o-mini`; however, Joao's explanation is logically sound.
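To make the model choice concrete, here is a minimal sketch of pinning a CrewAI agent to `gpt-4o-mini`; the role, goal, and backstory are placeholders for illustration:

```python
from crewai import Agent, LLM

researcher = Agent(
    role="Researcher",  # placeholder
    goal="Produce a thorough research report on {topic}",  # placeholder
    backstory="A focused research assistant.",  # placeholder
    llm=LLM(model="gpt-4o-mini"),  # reads OPENAI_API_KEY from the environment
    verbose=True,
)
```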
Choice of words is important for keeping agents focused on their intended tasks. For instance, if we have an agent researching content to hand off to a planning agent, we would set up the agent to research and produce a thorough report for the planning agent. See the `expected_output` examples below:
Example 1:

```yaml
expected_output: >
  A thorough research report on {topic} with relevant information that
  can be turned into educational content afterwards.
```

Example 2:

```yaml
expected_output: >
  A thorough research report on {topic} with relevant information that
  inspires educational content afterwards.
```
Example 2 uses the word `inspires` instead of `can be turned into`. It's a subtle adjustment that increases the likelihood of the research agent staying focused on gathering information rather than drifting toward creating the educational content itself.
Agents are great at taking in text and providing text back, making them suitable for many use cases. However, when outputs need to follow a defined structure (specific keys, values, and types), structured state management comes into play. By defining clear schemas, agents can output actual objects instead of raw text. These objects can then be manipulated programmatically, such as looping through sections or delegating specific parts of the data to other crews for further processing. This ensures outputs are not only predictable but also ready for seamless integration into workflows.
```python
# Structured State Management
class Section(BaseModel):
    title: str
    high_level_goal: str
    why_important: str
    sources: List[str]
    content_outline: List[str]


class EducationalPlan(BaseModel):
    sections: List[Section]


@CrewBase
class EduResearchCrew():
    """EduResearch crew"""

    # other code...

    @task
    def planning_task(self) -> Task:
        return Task(
            config=self.tasks_config['planning_task'],
            output_pydantic=EducationalPlan
        )
```
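With `output_pydantic` set, the task result can be treated as a real object rather than parsed text. Below is a minimal sketch, assuming the planning task above is the crew's final task so that the kickoff result's `pydantic` attribute holds an `EducationalPlan`:

```python
result = EduResearchCrew().crew().kickoff(inputs={"topic": "LLMs"})
plan: EducationalPlan = result.pydantic  # the structured object, not raw text

for section in plan.sections:
    # Each section can now be manipulated programmatically, e.g. handed off
    # to another crew that expands its outline into prose.
    print(section.title, "->", section.high_level_goal)
```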
[Joao Moura and Matthew Berman highlight that breaking tasks into specialized crews leads to significantly better results.](https://youtu.be/KAsrbqJ8yas?t=324) This approach ensures that each part of a larger process is handled by agents specifically designed to excel at that stage, reducing errors and improving efficiency.
For example, when creating content, the process can be divided into three distinct crews: one to plan the outline, one to write the draft, and one to review the result.
Each crew comprises specialized agents designed to perform their specific tasks with precision. By using this approach, the workflow becomes modular, allowing teams to optimize outputs at every stage. This not only enhances the quality of the final deliverable but also makes the process scalable for larger projects.
Such specialization mirrors real-world collaboration, where assigning experts to defined roles often produces more effective outcomes. Whether creating long-form content or managing complex workflows, dividing tasks into specialized crews ensures a structured, high-quality process tailored to the specific needs of the project.
I'll be using this approach to create a flow that converts my notes into blog content.
It starts from data extraction and progresses through multiple steps, each handled by specialized crews to produce a final blog. Here’s how the flow works:
1. **Scrape Crew**: The Scrape Crew reads the notes file and extracts the front matter, including the title, topic, goal, writing style, and word count, which steer the rest of the flow.
2. **Outline Crew**: Once the initial data is collected, the Outline Crew generates a structured outline. This outline ensures the blog has a clear direction and logical structure.
3. **Generate Blog Content**: Using the outline, the Writing Crew creates the blog's main content. This step involves converting the outline into a detailed blog draft based on the specified goals, word count, and writing style.
4. **Evaluate Blog**: The draft is reviewed by the Review Crew to ensure quality, coherence, and alignment with the original goals. If the content doesn't pass validation, feedback is provided.
5. **Save Blog or Exit**: If the blog passes validation, it is saved as a finalized Markdown file. This marks the completion of the process.
Here is the front matter of my notes file (`notes.md`) that the flow will convert into a blog post:

```markdown
---
title: Introduction to LLMs
topic: Learning about Large Language Models
goal: To create an educational blog post on introduction to LLMs.
writing_style: simple, precise, conversational, professional, reader-focused
word_count: 3,000
---
```

And the notes themselves:
- **Definition**:
- LLMs, or Large Language Models, are AI systems that excel at processing and generating natural language text.
- They are designed to mimic human-like understanding and conversation using advanced algorithms and training.
- Examples include GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), and others.
- **Core Functionality**:
- LLMs take in natural language input such as questions, prompts, or commands and respond with meaningful text.
- They can handle a variety of tasks, from simple text completion to more complex tasks like summarization, translation, or reasoning.
- The models predict what comes next in a sequence of words, based on patterns learned during training.
- **Training**:
- LLMs are built on neural network architectures, specifically transformers, which are highly effective for language tasks.
- They are trained on massive datasets, including books, articles, websites, and more, to expose them to a wide range of language styles and knowledge domains.
- The training involves billions or even trillions of parameters—these are numerical values that the model learns to adjust during training to understand and generate language better.
- **Capabilities**:
- They excel in tasks like:
- Completing sentences or paragraphs given a starting point.
- Summarizing long documents into concise versions.
- Translating text between different languages.
- Answering questions based on the context of the input.
- LLMs are capable of multi-turn conversations, maintaining context over several interactions.
- They can also perform creative tasks, such as writing poetry, generating stories, or composing songs.
- **Adaptability**:
- LLMs can be fine-tuned for specific industries or tasks. For example:
- Legal or medical-specific models that understand domain-specific terms and context.
- AI assistants trained for customer support or technical troubleshooting.
- Developers can customize these models by providing additional, focused datasets to refine their responses.
- **Strengths**:
- LLMs are versatile, handling a wide range of topics, writing styles, and tasks.
- They operate quickly and efficiently, making them suitable for real-time applications.
- Their ability to process large-scale data allows them to provide detailed and nuanced answers.
- **Limitations**:
- Despite their sophistication, LLMs are not perfect:
- They may generate incorrect or misleading information if the input is ambiguous or outside their training data.
- Biases present in their training data can reflect in their outputs.
- They lack genuine understanding or reasoning; their "knowledge" comes from patterns in the data they were trained on.
- They can struggle with highly specialized or niche topics unless fine-tuned for those areas.
- **Applications**:
- LLMs are used in various fields, including:
- Virtual assistants like Siri, Alexa, or Google Assistant.
- Chatbots for customer service or technical support.
- Content generation tools for writing articles, blogs, or marketing copy.
- Research tools that help summarize studies or answer complex queries.
- Education, where they provide explanations, tutoring, or language translation.
- Creative writing aids for authors, screenwriters, or marketers.
- **Future Potential**:
- As LLMs evolve, they are expected to become more personalized and interactive, tailoring responses to individual users' needs.
- Their integration into everyday tools (e.g., word processors, email clients, project management software) can automate repetitive tasks.
- Improved reasoning and reduced bias are ongoing research areas that could make LLMs more reliable and ethical.
- They may power advanced AI systems for decision-making, providing insights, or assisting in complex workflows.
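As a side note, the flow below uses an agent to extract this front matter, but if you prefer a deterministic parse, the python-frontmatter package can read it without an LLM. A hypothetical alternative, not part of this project:

```python
import frontmatter  # pip install python-frontmatter

post = frontmatter.load("./notes.md")
print(post.metadata["title"])       # Introduction to LLMs
print(post.metadata["word_count"])  # 3,000
print(post.content[:80])            # the notes body that follows the front matter
```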
With the notes in place, here are the Pydantic models (`types.py`) that give each crew a structured output schema:

```python
# types.py
from typing import List, Optional

from pydantic import BaseModel


class SubHeader(BaseModel):
    subheader: str
    subheader_description: str


class Header(BaseModel):
    header: str
    header_description: str
    subheader: List[SubHeader]


class BlogOutline(BaseModel):
    sections: List[Header]


class BlogFrontMatter(BaseModel):
    title: str
    topic: str
    goal: str
    writing_style: str
    word_count: str


class Section(BaseModel):
    title: str
    content: str


class VerifyBlog(BaseModel):
    feedback: Optional[str] = None
    valid: bool = False
```
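These models also act as validators. Here's a quick sketch, assuming Pydantic v2, of how a structured review result round-trips; this is what makes the `valid`/`feedback` checks in the flow below reliable:

```python
from note_to_blog_flow.types import VerifyBlog

raw = '{"feedback": "All headers must be H3.", "valid": false}'
review = VerifyBlog.model_validate_json(raw)  # raises if the shape is wrong

if not review.valid:
    print("Retry with feedback:", review.feedback)
```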
First, the Scrape Crew (`scrape_crew.py`):

```python
# scrape_crew.py
from crewai import LLM, Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import FileReadTool, MDXSearchTool

from note_to_blog_flow.config import FILE_PATH, LLM_CONFIGS
from note_to_blog_flow.types import BlogFrontMatter

read_notes = FileReadTool(file_path=FILE_PATH)
semantic_search_notes = MDXSearchTool(mdx=FILE_PATH)


@CrewBase
class ScrapeCrew:
    """Scrape Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    openai_llm = LLM(
        model=LLM_CONFIGS["openai"]["model"],
        api_key=LLM_CONFIGS["openai"]["api_key"],
    )
    anthropic_llm = LLM(
        model=LLM_CONFIGS["anthropic"]["model"],
        api_key=LLM_CONFIGS["anthropic"]["api_key"],
    )

    @agent
    def scraper(self) -> Agent:
        return Agent(
            config=self.agents_config["scraper"],
            llm=self.openai_llm,
            tools=[read_notes, semantic_search_notes],
            verbose=True,
        )

    @task
    def scrape_front_matter(self) -> Task:
        return Task(
            config=self.tasks_config["scrape_front_matter"],
            output_pydantic=BlogFrontMatter,
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Scrape Crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
```yaml
# config/agents.yaml
scraper:
  role: >
    Note Scraper
  goal: >
    Read through the author's notes and capture the title, topic, goal,
    writing style, and word count from its front matter.
  backstory: >
    You are an efficient and detail-oriented scraper, adept at extracting
    key information from text. Your task is to accurately capture the
    necessary details from the author's notes to assist in the blog creation process.
```

```yaml
# config/tasks.yaml
scrape_front_matter:
  description: >
    Read through the author's notes and extract the following details from its front matter:
    - Title
    - Topic
    - Goal
    - Writing style
    - Word count
  expected_output: >
    title: title value
    topic: topic value
    goal: goal value
    writing_style: writing_style value
    word_count: word_count value
  agent: scraper
```
Next, the Outline Crew (`outline_crew.py`):

```python
# outline_crew.py
from crewai import LLM, Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import FileReadTool, MDXSearchTool

from note_to_blog_flow.config import FILE_PATH, LLM_CONFIGS
from note_to_blog_flow.types import BlogOutline

read_notes = FileReadTool(file_path=FILE_PATH)
semantic_search_notes = MDXSearchTool(mdx=FILE_PATH)


@CrewBase
class OutlineCrew:
    """Blog Outline Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    openai_llm = LLM(
        model=LLM_CONFIGS["openai"]["model"],
        api_key=LLM_CONFIGS["openai"]["api_key"],
    )
    anthropic_llm = LLM(
        model=LLM_CONFIGS["anthropic"]["model"],
        api_key=LLM_CONFIGS["anthropic"]["api_key"],
    )

    @agent
    def outliner(self) -> Agent:
        return Agent(
            config=self.agents_config["outliner"],
            llm=self.openai_llm,
            tools=[read_notes, semantic_search_notes],
            verbose=True,
        )

    @task
    def generate_outline(self) -> Task:
        return Task(
            config=self.tasks_config["generate_outline"],
            output_pydantic=BlogOutline,
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Blog Outline Crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
```yaml
# config/agents.yaml
outliner:
  role: >
    Senior Blog Content Planner
  goal: >
    Based on the author's notes,
    generate a blog outline about {topic}. The generated
    outline must include a description and the relevant
    context from the author's notes for each section of the blog.
    Here are the author's desired goals for the blog: {goal}
  backstory: >
    You are a skilled organizer, great at turning scattered
    information into a structured format. Your goal is to create
    clear, concise section outlines with all key topics and subtopics covered.
```

```yaml
# config/tasks.yaml
generate_outline:
  description: >
    Utilize tools to extract the author's notes and
    create a blog outline with sections in sequential order
    based on the author's notes. Ensure that each section
    has a header, subheader (if necessary), a brief
    description of the headers and subheaders, and the relevant context
    from the author's notes that highlights the topic to be
    covered. It's important to note that the blog is only going to be
    {word_count} words or less. Also, make sure that you do not duplicate
    any sections or topics in the outline.
    Here are the author's desired goals for the blog: {goal}
  expected_output: >
    An outline of sections, with headers, subheaders for each header, and
    descriptions of what each header will contain.
  agent: outliner
```
The Writing Crew (`writing_crew.py`):

```python
# writing_crew.py
from crewai import LLM, Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import FileReadTool, MDXSearchTool

from note_to_blog_flow.config import FILE_PATH, LLM_CONFIGS
from note_to_blog_flow.tools.CharacterCounterTool import CharacterCounterTool
from note_to_blog_flow.types import Section

read_notes = FileReadTool(file_path=FILE_PATH)
semantic_search_notes = MDXSearchTool(mdx=FILE_PATH)
character_counter = CharacterCounterTool()


@CrewBase
class WriteBlogSectionCrew:
    """Write Blog Section Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    openai_llm = LLM(
        model=LLM_CONFIGS["openai"]["model"],
        api_key=LLM_CONFIGS["openai"]["api_key"],
    )
    anthropic_llm = LLM(
        model=LLM_CONFIGS["anthropic"]["model"],
        api_key=LLM_CONFIGS["anthropic"]["api_key"],
    )

    @agent
    def writer(self) -> Agent:
        return Agent(
            config=self.agents_config["writer"],  # pyright: ignore
            llm=self.openai_llm,
            tools=[read_notes, semantic_search_notes, character_counter],
            verbose=True,
        )

    @task
    def write_blog(self) -> Task:
        return Task(
            config=self.tasks_config["write_blog"],  # pyright: ignore
            output_pydantic=Section,
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Write Blog Section Crew"""
        return Crew(
            agents=self.agents,  # pyright: ignore
            tasks=self.tasks,  # pyright: ignore
            process=Process.sequential,
            verbose=True,
        )
```
```yaml
# config/agents.yaml
writer:
  role: >
    Senior Blog Writer
  goal: >
    Write a well-structured blog
    based on the blog outline, author's goal, and author's
    notes. The blog must be written in markdown format, with a
    {writing_style} style and contain around {word_count} words.
  backstory: >
    You are an exceptional writer, known for producing
    blog content in a {writing_style}
    writing style. You excel at transforming complex ideas into
    readable and well-organized sections.
```

````yaml
# config/tasks.yaml
write_blog:
  description: >
    Utilize tools to extract the author's notes, check word count, and
    write a well-structured blog based on the title, blog outline, goal,
    and the author's notes. You must ensure that any links or sources
    in the author's notes are used in the blog.
    The blog must be written in markdown format,
    with a {writing_style} style and should contain around {word_count}
    words.

    Please incorporate the following feedback if present:
    {feedback}

    Here is the blog_outline:\n\n {blog_outline}
    Here is the topic for the blog: {topic}
    Here is the author's goal for the blog: {goal}
    Here is the title of the blog: {title}

    The blog must follow these formatting rules: \n
    - All headers must be H3 type "###"
    - All sub-headers must be H4 type "####"
    - All code block examples must be annotated with the code language, Python for example:\n
    ```python
    def function():
        # code goes here
    ```
  expected_output: >
    A markdown-formatted blog that:
    - Covers the blog outline, author's goal, and the author's notes comprehensively.
    - Adheres to the specified {writing_style} and contains around {word_count} words.
    - Uses H3 type "###" for all headers.
    - Uses H4 type "####" for all sub-headers.
    - Properly formats all code block examples with the specified code language.
  agent: writer
````
The Review Crew (`review_crew.py`):

```python
# review_crew.py
from crewai import LLM, Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import FileReadTool, MDXSearchTool

from note_to_blog_flow.config import FILE_PATH, LLM_CONFIGS
from note_to_blog_flow.tools.CharacterCounterTool import CharacterCounterTool
from note_to_blog_flow.types import VerifyBlog

read_notes = FileReadTool(file_path=FILE_PATH)
semantic_search_notes = MDXSearchTool(mdx=FILE_PATH)
character_counter = CharacterCounterTool()


@CrewBase
class ReviewBlogCrew:
    """Review Blog Crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    openai_llm = LLM(
        model=LLM_CONFIGS["openai"]["model"],
        api_key=LLM_CONFIGS["openai"]["api_key"],
    )
    anthropic_llm = LLM(
        model=LLM_CONFIGS["anthropic"]["model"],
        api_key=LLM_CONFIGS["anthropic"]["api_key"],
    )

    @agent
    def blog_verifier(self) -> Agent:
        return Agent(
            config=self.agents_config["blog_verifier"],
            llm=self.openai_llm,
            tools=[read_notes, semantic_search_notes, character_counter],
            verbose=True,
        )

    @task
    def verify_blog(self) -> Task:
        return Task(
            config=self.tasks_config["verify_blog"],
            output_pydantic=VerifyBlog,
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Review Blog Crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
```yaml
# config/agents.yaml
blog_verifier:
  role: >
    Senior Blog Verifier
  goal: >
    Ensure that the blog meets the strict guidelines:
    Review the generated blog content to ensure it follows the specified
    format guidelines and template, checking every rule.
  backstory: >
    You are an exceptional content reviewer, known for your meticulous
    attention to detail and adherence to formatting guidelines. You excel
    at ensuring that all content is well-structured and follows the specified
    templates and rules.
```

```yaml
# config/tasks.yaml
verify_blog:
  description: >
    Review the generated blog content to ensure it follows the
    strict guidelines. Utilize your tools
    to read the author's notes. Additionally, if you believe there
    are any issues with the blog or ways it could be improved,
    such as the structure of the blog, rhythm, writing style,
    tone, word choice, and content, please provide feedback.
    If any of the criteria are not met, the post is considered invalid.
    Provide actionable changes about what is wrong and what actions
    need to be taken to fix the post.

    Your final response must include:
    - Valid: True/False
    - Feedback: Provide commentary if the post fails any of the criteria.

    The review must check the following guidelines:
    - All headers are H3 type "###".
    - All sub-headers are H4 type "####".
    - All code block examples are properly formatted with the code language specified.
    - All links and sources shared in the author's notes are covered in the blog.

    Here is the blog_outline:\n\n {blog_outline}
    Here is the topic for the blog: {topic}
    Here is the author's goal for the blog: {goal}
    Here is the title of the blog: {title}
    Here is the blog to verify:
    {blog}
  expected_output: >
    Pass: True/False
    Feedback: Commentary here if failed.
  agent: blog_verifier
```
The custom character-counter tool used by the writer and verifier (`tools/CharacterCounterTool.py`):

```python
# tools/CharacterCounterTool.py
from typing import Type

from crewai_tools import BaseTool
from pydantic import BaseModel, Field


class CharacterCounterInput(BaseModel):
    """Input schema for CharacterCounterTool."""

    text: str = Field(..., description="The string to count characters in.")


class CharacterCounterTool(BaseTool):
    name: str = "Character Counter Tool"
    description: str = "Counts the number of characters in a given string."
    args_schema: Type[BaseModel] = CharacterCounterInput

    def _run(self, text: str) -> str:
        character_count = len(text)
        return f"The input string has {character_count} characters."
```
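One thing to note: this tool counts characters, while the writing and review tasks talk about word count. If you want a literal word count, a similar tool could split on whitespace. A hypothetical variant, not part of the project:

```python
from typing import Type

from crewai_tools import BaseTool
from pydantic import BaseModel, Field


class WordCounterInput(BaseModel):
    """Input schema for a hypothetical WordCounterTool."""

    text: str = Field(..., description="The string to count words in.")


class WordCounterTool(BaseTool):
    name: str = "Word Counter Tool"
    description: str = "Counts the number of words in a given string."
    args_schema: Type[BaseModel] = WordCounterInput

    def _run(self, text: str) -> str:
        word_count = len(text.split())  # whitespace-delimited tokens
        return f"The input string has {word_count} words."
```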
```python
# config.py
import os

LLM_CONFIGS = {
    "openai": {
        "model": "gpt-4o-mini",
        "api_key": os.getenv("OPENAI_API_KEY"),
    },
    "anthropic": {
        "model": "anthropic/claude-3-5-sonnet-20240620",
        "api_key": os.getenv("ANTHROPIC_API_KEY"),
    },
}

LANGTRACE_API_KEY = os.getenv("LANGTRACE_API_KEY")
FILE_PATH = "./notes.md"
```
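Since `config.py` pulls keys from the environment, one convenient option is loading them from a `.env` file with python-dotenv before anything imports the config. A sketch, assuming a `.env` file at the project root:

```python
from dotenv import load_dotenv  # pip install python-dotenv

# Populates OPENAI_API_KEY, ANTHROPIC_API_KEY, and LANGTRACE_API_KEY
# from .env so the os.getenv() calls above can find them.
load_dotenv()
```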
Finally, the flow that wires all the crews together:

```python
import json
import os
from typing import List, Optional

from crewai.flow.flow import Flow, listen, or_, router, start
from langtrace_python_sdk import langtrace
from pydantic import BaseModel

from note_to_blog_flow.config import LANGTRACE_API_KEY
from note_to_blog_flow.crews.editing_crew.editing_crew import EditingCrew
from note_to_blog_flow.crews.outline_crew.outline_crew import OutlineCrew
from note_to_blog_flow.crews.review_crew.review_crew import ReviewBlogCrew
from note_to_blog_flow.crews.scrape_crew.scrape_crew import ScrapeCrew
from note_to_blog_flow.crews.writing_crew.writing_crew import WriteBlogSectionCrew
from note_to_blog_flow.types import Header

langtrace.init(api_key=LANGTRACE_API_KEY)


class BlogState(BaseModel):
    title: str = ""
    topic: str = ""
    goal: str = ""
    word_count: str = ""
    writing_style: str = ""
    blog: str = ""
    blog_outline: List[Header] = []  # the outline crew returns a list of Header objects
    feedback: Optional[str] = None
    date_time: str = ""
    retry_count: int = 0
    valid: bool = False


class BlogFlow(Flow[BlogState]):
    initial_state = BlogState

    if not os.path.exists("output"):
        os.makedirs("output")

    @start()
    def scrape_crew(self):
        print("Kickoff the Scrape Crew")
        output = ScrapeCrew().crew().kickoff()
        print("Front Matter:", output)
        print("Output type:", type(output))
        try:
            self.state.title = output["title"]
            self.state.topic = output["topic"]
            self.state.goal = output["goal"]
            self.state.word_count = output["word_count"]
            self.state.writing_style = output["writing_style"]
        except AttributeError:
            raise ValueError(
                "CrewOutput does not return the expected attributes. Verify its structure."
            )
        return output

    @listen(scrape_crew)
    def outline_crew(self):
        print("Kickoff the Outline Crew")
        output = (
            OutlineCrew()
            .crew()
            .kickoff(
                inputs={
                    "goal": self.state.goal,
                    "topic": self.state.topic,
                    "word_count": self.state.word_count,
                }
            )
        )
        sections = output["sections"]
        print("Sections:", sections)
        self.state.blog_outline = sections
        print("Blog Outline:", self.state.blog_outline)
        return sections

    @listen(outline_crew)
    def save_outline(self):
        print("Saving the Outline")
        file_name = f"./output/{self.state.title.replace(' ', '_')}_outline.json"
        with open(file_name, "w", encoding="utf-8") as file:
            json.dump(
                [section.model_dump() for section in self.state.blog_outline],
                file,
                ensure_ascii=False,
                indent=4,
            )

    @listen(or_(outline_crew, "retry"))
    def generate_blog_content(self):
        if self.state.retry_count == 0:
            print("Kickoff the Writing Crew")
            output = (
                WriteBlogSectionCrew()
                .crew()
                .kickoff(
                    inputs={
                        "blog_outline": self.state.blog_outline,
                        "goal": self.state.goal,
                        "topic": self.state.topic,
                        "word_count": self.state.word_count,
                        "writing_style": self.state.writing_style,
                        "title": self.state.title,
                        "feedback": self.state.feedback,
                    }
                )
            )
            blog_content = output["content"]
            print("Blog:", blog_content)
            self.state.blog = blog_content
            return blog_content

        if not self.state.valid and self.state.retry_count > 0:
            print("Kickoff the Revised Writing Crew")
            output = (
                EditingCrew()
                .crew()
                .kickoff(
                    inputs={
                        "blog": self.state.blog,
                        "feedback": self.state.feedback,
                    }
                )
            )
            blog_content = output["content"]
            print("Blog:", blog_content)
            self.state.blog = blog_content
            return blog_content

    @router(generate_blog_content)
    def evaluate_blog(self):
        print("Kickoff Verify the Blog Crew")
        print("Retry count:", self.state.retry_count)
        if self.state.retry_count > 3:
            print("Max retry exceeded")
            return "max_retry_exceeded"
        else:
            print("Retrying, current count:", self.state.retry_count)
            result = (
                ReviewBlogCrew()
                .crew()
                .kickoff(
                    inputs={
                        "blog_outline": self.state.blog_outline,
                        "blog": self.state.blog,
                        "goal": self.state.goal,
                        "topic": self.state.topic,
                        "word_count": self.state.word_count,
                        "writing_style": self.state.writing_style,
                        "title": self.state.title,
                        "date_time": self.state.date_time,
                    }
                )
            )
            self.state.valid = result["valid"]
            self.state.feedback = result["feedback"]
            print("valid", self.state.valid)
            print("feedback", self.state.feedback)
            self.state.retry_count += 1
            if self.state.valid:
                print("Validation successful, transitioning to complete")
                return "complete"
            print("Validation failed, transitioning to retry")
            return "retry"  # Emit the retry state explicitly

    @listen("complete")
    def save_blog(self):
        print("Saving the Blog")
        print("Blog is valid")
        print("Blog:", self.state.blog)
        file_name = f"./output/{self.state.title.replace(' ', '_')}_complete.md"
        with open(file_name, "w", encoding="utf-8") as file:
            file.write(self.state.blog)

    @listen("max_retry_exceeded")
    def max_retry_exceeded_exit(self):
        print("Max retry count exceeded")
        print("Blog:", self.state.blog)
        print("Feedback:", self.state.feedback)
        file_name = (
            f"./output/{self.state.title.replace(' ', '_')}_max_retry_exceeded.md"
        )
        with open(file_name, "w", encoding="utf-8") as file:
            file.write(self.state.blog)


def kickoff():
    blog_flow = BlogFlow()
    blog_flow.kickoff()
    blog_flow.plot()


if __name__ == "__main__":
    kickoff()
```
The `BlogFlow` code defines a step-by-step process for generating, validating, and saving a blog. It combines multiple crews, each specializing in a specific task, to automate the entire blogging workflow.

The purpose of `BlogFlow` is to manage the blog creation process, from initial topic scraping to generating a polished final output. It ensures that each step is structured, validated, and, if needed, retried. By leveraging modular crews, it splits complex tasks into manageable components, making the process efficient and scalable.

- **State Management (`BlogState`)**: `BlogState` is the central storage for all information about the blog being created, including the title, topic, outline, content, and feedback.
- **Crew Operations**: The ScrapeCrew extracts initial information about the blog, such as the title, topic, and goals; the remaining crews build on it to outline, write, review, and, when needed, edit the draft.
- **Retry Logic**: If the blog fails validation, the system retries up to three times with edits based on feedback. This ensures high-quality results while preventing endless retries.
- **Saving Results**: Successful blogs are saved as `.md` files in an output directory.