I'm thrilled to introduce Vishu (MCP) Suite, an open-source application I've been developing that takes a novel approach to vulnerability assessment and reporting by deeply integrating Large Language Models (LLMs) into its core workflow.

What's the Big Idea?

Instead of just using LLMs for summarization at the end, Vishu (MCP) Suite employs them as a central reasoning engine throughout the assessment process. This is managed by a robust Model Context Protocol (MCP) agent scaffolding designed for complex task execution.

Core Capabilities & How LLMs Fit In:

1. Intelligent Workflow Orchestration: The LLM, guided by the MCP, can:

• Plan and Strategize: Using a SequentialThinkingPlanner tool, the LLM breaks down high-level goals (e.g., "assess example.com for web vulnerabilities") into a series of logical thought steps. It can even revise its plan based on incoming data!
• Dynamic Tool Selection & Execution: Based on its plan, the LLM chooses and executes appropriate tools from a growing arsenal. Current tools include:
  ◇ Port Scanning (PortScanner)
  ◇ Subdomain Enumeration (SubDomainEnumerator)
  ◇ DNS Enumeration (DnsEnumerator)
  ◇ Web Content Fetching (GetWebPages, SiteMapAndAnalyze)
  ◇ Web Searches for general info and CVEs (WebSearch, WebSearch4CVEs)
  ◇ Data Ingestion & Querying from a vector DB (IngestText2DB, QueryVectorDB, QueryReconData, ProcessAndIngestDocumentation)
  ◇ Comprehensive PDF Report Generation from findings (FetchDomainDataForReport, RetrievePaginatedDataSection, CreatePDFReportWithSummaries)
• Contextual Result Analysis: The LLM receives tool outputs and uses them to inform its next steps, reflecting on progress and adapting as needed. The REFLECTION_THRESHOLD in the client ensures it periodically reviews its overall strategy (a rough sketch of this loop follows below).
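To make the orchestration idea concrete, here is a minimal, hypothetical sketch of the plan → execute → reflect cycle described above. Only the REFLECTION_THRESHOLD concept comes from the project; the llm.chat interface, the tools dict, and the decision format are assumptions for illustration, not ReConClient.py's actual API.

```python
# Hypothetical sketch of the plan -> tool -> reflect loop; not the real client code.
REFLECTION_THRESHOLD = 5  # assumed value: force a strategy review every 5 steps


def run_assessment(goal: str, llm, tools: dict) -> list[dict]:
    """Drive an LLM through plan -> execute -> analyze cycles for one goal."""
    history = [{"role": "user", "content": f"Plan an assessment for: {goal}"}]
    findings = []

    for step in range(1, 50):  # hard cap so a runaway plan terminates
        # 1. Ask the LLM for its next thought / tool choice (assumed interface).
        decision = llm.chat(history)
        if decision.get("done"):
            break

        # 2. Execute the chosen tool, e.g. "PortScanner" or "SubDomainEnumerator".
        tool = tools[decision["tool"]]
        result = tool(**decision["args"])
        findings.append({"tool": decision["tool"], "result": result})

        # 3. Feed the output back so the next step is informed by it.
        history.append({"role": "tool", "content": str(result)})

        # 4. Periodically prompt a reflection pass, mirroring REFLECTION_THRESHOLD.
        if step % REFLECTION_THRESHOLD == 0:
            history.append({"role": "user",
                            "content": "Reflect: is the current plan still the best path to the goal?"})

    return findings
```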
2. Unique MCP Agent Scaffolding & SSE Framework:

◇ The MCP-Agent Scaffolding (ReConClient.py): This isn't just a script runner. The scaffolding manages "plans" (assessment tasks), maintains a conversation history with the LLM for each plan, handles tool execution (including caching results), and manages the LLM's thought process. It's built to be robust, with features like retry logic for tool calls and LLM invocations.
◇ Server-Sent Events (SSE) for Real-Time Interaction (Rizzler.py, mcp_client_gui.py): The FastAPI-based backend communicates with the client (including a Dear PyGui interface) over SSE. This allows for:
  ▪ Live Streaming of Tool Outputs: Watch tools like port scanners or site mappers send back data in real time.
  ▪ Dynamic Updates: The GUI reflects the agent's status, new plans, and tool logs as they happen.
  ▪ Flexibility & Extensibility: The SSE framework makes it easier to integrate new streaming or long-running tools and have their progress reflected immediately. The tool registration in Rizzler.py (@mcpServer.tool()) is designed for easy extension; a sketch of this pattern follows below.
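To give a feel for how a streaming tool could plug into this, here is a hedged sketch. The @mcpServer.tool() decorator name comes from Rizzler.py, but the FastMCP import, the PortScanner body, the events queue, and the /events SSE route are illustrative assumptions rather than the project's real implementation.

```python
# Sketch only: registering a streaming tool and relaying its progress over SSE.
import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from mcp.server.fastmcp import FastMCP  # assumed MCP SDK; the project may differ

mcpServer = FastMCP("vishu-tools")
app = FastAPI()
events = asyncio.Queue()  # progress events consumed by the GUI


@mcpServer.tool()
async def PortScanner(host: str, ports: str = "1-1024") -> list[int]:
    """Illustrative port scan that reports each open port as it is found."""
    open_ports: list[int] = []
    start, end = (int(p) for p in ports.split("-"))
    for port in range(start, end + 1):
        try:
            _, writer = await asyncio.wait_for(
                asyncio.open_connection(host, port), timeout=0.3)
            writer.close()
            open_ports.append(port)
            await events.put({"tool": "PortScanner", "open_port": port})
        except (OSError, asyncio.TimeoutError):
            pass  # closed or filtered port
    return open_ports


@app.get("/events")
async def stream_events() -> StreamingResponse:
    """SSE endpoint so a client such as the Dear PyGui GUI can watch tools live."""
    async def event_source():
        while True:
            event = await events.get()
            yield f"data: {json.dumps(event)}\n\n"  # SSE wire format
    return StreamingResponse(event_source(), media_type="text/event-stream")
```

The point of the pattern is that any long-running tool only needs to push events onto a shared queue; the SSE route takes care of getting them to the GUI as they happen.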
We Need Your Help to Make It Even Better!

This is an ongoing project, and I believe it has a lot of potential. I'd love for the community to get involved:

◇ Try It Out: Clone the repo, set it up (you'll need a GOOGLE_API_KEY and potentially a local SearXNG instance, etc. – see the .env patterns and the pre-flight sketch below), and run some assessments!
  ▪ GitHub Repo: https://github.com/seyrup1987/ReconRizzler-Alpha
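If it helps with setup, here is a tiny pre-flight check you could run before starting the suite. GOOGLE_API_KEY and the SearXNG instance are mentioned above; the SEARXNG_URL variable name and the python-dotenv usage are my assumptions, so check the repo's .env patterns for the real names.

```python
# Hypothetical pre-flight check for the suite's environment configuration.
import os
import sys

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # pulls values from a local .env file into the environment

missing = [name for name in ("GOOGLE_API_KEY",) if not os.getenv(name)]
if missing:
    sys.exit(f"Missing required settings: {', '.join(missing)} (see the repo's .env patterns)")

searxng = os.getenv("SEARXNG_URL", "http://localhost:8080")  # assumed variable name and default
print(f"Using SearXNG instance at {searxng}")
```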