Agentic

When we research a codebase to accomplish a complex task, we tend to think "wide": we consider the implications, integrations, dependencies, and so on. Proper execution is deeply grounded in our ability to find all the interfaces and edge cases. As humans, we do this by bringing all the stakeholders into one room (Product, Algo, Backend, Frontend, etc.) and planning, over several iterations, how to achieve optimal execution.

With the above in mind, our tools mimic this behavior at 4 levels of complexity:

  1. Only context is needed (/context): This level provides a small portion of context to the Large Language Model (LLM) to familiarize it with domain-specific knowledge. Examples include referencing documentation or asking basic questions about explicit knowledge in the codebase (e.g., "Find all areas in the code that call the auth API").

  2. Multiple context representations (/ask): Superior context is not only about the similarity of code chunks. For simple tasks, similarity alone gives the LLM a boost, but as complexity grows, broad knowledge is key. Here we use multiple context providers, each an "expert" in its own domain, to give the LLM the most relevant context for each query in one shot (a sketch of this fan-out appears after this list).

  3. Deep reasoning (/deep-research): This level aims to explore the concept behind the user's query in depth. Much as humans loop around areas of interest until they have all the answers, the agent fetches context around the user's query and then uses its reasoning and tools to ask further questions and find the data needed for a more profound answer (see the research-loop sketch after this list).

  4. Cross-discipline deep reasoning (/deep-research): This is the most complex level. It applies the principles of deep reasoning but adds a core step: asking at the organizational level, "Which domains, across repositories or in the current repository, do we need to explore to answer the user's query?" The goal is to identify all critical areas in the system, which requires broad knowledge of the codebase and its history. Each path is then explored separately, converging on a master agent that decides whether the retrieved data is relevant to the user's query (see the final sketch after this list). This level aims for "principal-engineer reasoning and knowledge": an agent that can map the entire organization and find all relevant exploration paths.
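
The following is a minimal sketch of how levels 1 and 2 can be wired together: a single context provider covers the /context case, and fanning out to several "expert" providers corresponds to /ask. All names here (ContextProvider, ContextChunk, build_prompt) are hypothetical illustrations, not the actual implementation behind these commands.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ContextChunk:
    source: str   # e.g. a file path or doc URL
    text: str
    score: float  # provider-specific relevance score


class ContextProvider(Protocol):
    """One 'expert' over a single context representation (embeddings, symbol graph, docs, ...)."""
    def retrieve(self, query: str, limit: int) -> list[ContextChunk]: ...


def build_prompt(query: str, providers: list[ContextProvider], per_provider: int = 5) -> str:
    """Level 1 uses a single provider; level 2 fans out to all of them in one shot."""
    chunks: list[ContextChunk] = []
    for provider in providers:
        chunks.extend(provider.retrieve(query, per_provider))
    # Keep the globally strongest chunks so the prompt stays within budget.
    chunks.sort(key=lambda c: c.score, reverse=True)
    context = "\n\n".join(f"[{c.source}]\n{c.text}" for c in chunks[: per_provider * 2])
    return f"Context:\n{context}\n\nQuestion: {query}"
```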
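
Level 3 can be pictured as a loop of "fetch context, reason, ask follow-up questions" that runs until nothing is left to explore. The sketch below assumes hypothetical `retrieve` and `llm` callables standing in for the context layer and the model; it illustrates the loop, not the product's actual agent.

```python
from typing import Callable


def deep_research(
    query: str,
    retrieve: Callable[[str], str],  # returns context text for a question (assumed)
    llm: Callable[[str], str],       # returns the model's raw answer text (assumed)
    max_rounds: int = 4,
) -> str:
    findings: list[str] = []
    open_questions = [query]
    for _ in range(max_rounds):
        if not open_questions:
            break
        question = open_questions.pop(0)
        context = retrieve(question)
        answer = llm(
            f"Context:\n{context}\n\nQuestion: {question}\n"
            "Answer, then list any follow-up questions on new lines prefixed with 'FOLLOW-UP:'."
        )
        findings.append(answer)
        # New questions feed the next round of the loop.
        open_questions += [
            line.removeprefix("FOLLOW-UP:").strip()
            for line in answer.splitlines()
            if line.startswith("FOLLOW-UP:")
        ]
    # Final pass: synthesize everything gathered during the loop into one answer.
    return llm(
        "Synthesize a single answer to: " + query + "\n\nFindings:\n" + "\n---\n".join(findings)
    )
```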
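
Level 4 adds the domain-mapping step and the converging master agent. The sketch below is an assumed structure: enumerate candidate domains first, explore each path independently (for example with a loop like `deep_research` above, here passed in as `research_domain`), then let a master step keep only the findings it judges relevant before producing the final answer.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable


def cross_domain_research(
    query: str,
    list_domains: Callable[[str], list[str]],    # e.g. ["auth", "billing-service", "web-frontend"] (assumed)
    research_domain: Callable[[str, str], str],  # (domain, query) -> findings for that domain (assumed)
    llm: Callable[[str], str],                   # the model (assumed)
) -> str:
    domains = list_domains(query)
    # Explore every candidate domain in parallel; each exploration path is independent.
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(lambda d: (d, research_domain(d, query)), domains))
    # Master step: drop findings the model judges irrelevant to the original query.
    relevant = [
        f"# {domain}\n{result}"
        for domain, result in findings
        if llm(f"Is this relevant to '{query}'? Answer yes or no.\n\n{result}")
        .strip()
        .lower()
        .startswith("yes")
    ]
    return llm(f"Answer '{query}' using only these findings:\n\n" + "\n\n".join(relevant))
```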
