---
title: "Prompt Engineering for Computer Science & IT"
description: "How LLMs are becoming part of the standard software-engineering toolkit — and what you could build with them in this course."
---

[@styles]: ./styles.css

# Prompt Engineering for Computer Science & IT

For CS and IT students, LLMs have moved from research curiosity to daily tool. Agents, RAG pipelines, and evaluation harnesses are showing up in job descriptions, open-source projects, and grad-school research alike.

## Where this is showing up in CS & IT

- Coding agents — [Cursor](https://cursor.com), [Claude Code](https://www.anthropic.com/claude-code), and [GitHub Copilot Workspace](https://githubnext.com/projects/copilot-workspace) — now plan and ship multi-file changes inside real repos.
- Open-source RAG frameworks ([LangChain](https://www.langchain.com), [LlamaIndex](https://www.llamaindex.ai), [Haystack](https://haystack.deepset.ai)) power internal search and Q&A at many large engineering orgs.
- Evaluation is becoming its own subfield — [OpenAI Evals](https://github.com/openai/evals), the [UK AI Security Institute's Inspect framework](https://inspect.aisi.org.uk), and benchmarks like [SWE-bench](https://www.swebench.com) and [HumanEval](https://github.com/openai/human-eval) drive model selection.
- Structured outputs, tool/function calling, and multi-agent orchestration are hardening into standard production patterns.
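
Tool/function calling, mentioned in the last bullet, reduces to a simple loop: the model emits a structured request naming a tool and its arguments, the application executes it, and the result is fed back. A minimal stdlib-only sketch in Python — the tool registry and the hard-coded model reply are illustrative assumptions, not any specific vendor's API:

```python
import json

# Hypothetical tool registry: name -> callable. Real systems also publish
# a JSON Schema per tool so the model knows the expected argument shapes.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def dispatch(model_reply: str):
    """Parse a structured tool call emitted by the model and execute it."""
    call = json.loads(model_reply)  # expect {"tool": ..., "arguments": {...}}
    return TOOLS[call["tool"]](call["arguments"])

# Stand-in for a model response; a real reply would come from an LLM API.
reply = '{"tool": "add", "arguments": {"a": 2, "b": 3}}'
print(dispatch(reply))  # 5
```

In production, the dispatch result is appended to the conversation and the model is called again, looping until it stops requesting tools.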

## Projects you could build in this course

- A coding agent that navigates a repo and proposes pull requests
- A RAG system over a large technical documentation corpus
- An evaluation harness that benchmarks prompts across models and tracks regressions
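
The evaluation harness in the last project is, at its core, a loop over (prompt, expected) cases with a pass-rate summary. A stdlib-only sketch, with a stubbed model function standing in for a real LLM call — all names here are illustrative, not from any framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    prompt: str
    expected: str

def run_eval(model_fn: Callable[[str], str], cases: list[Case]) -> float:
    """Run every case through the model and return the pass rate.
    A regression tracker would log this per (prompt version, model)."""
    passed = sum(model_fn(c.prompt).strip() == c.expected for c in cases)
    return passed / len(cases)

# Stub model with canned answers; a real harness would call an API here.
answers = {"2+2?": "4", "capital of France?": "Paris"}
cases = [Case("2+2?", "4"), Case("capital of France?", "Paris")]
print(run_eval(lambda p: answers.get(p, ""), cases))  # 1.0
```

Swapping in a second `model_fn` and diffing the two pass rates is the seed of the cross-model benchmarking described above.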

[← Back to Thinking With Machines](./index.path.md)

