
ChatGPT Prompts for Python Developers

From pandas DataFrames to Flask APIs — these prompts turn ChatGPT into your Python pair programmer.

12 prompts | Updated March 2026

Python is the most popular language for AI, data science, and automation. These prompts help you write cleaner scripts, debug tricky issues, build APIs, analyze data with pandas, and automate repetitive tasks — all with ChatGPT as your coding partner.

1

Pandas DataFrame Manipulation

I have a pandas DataFrame with the following columns and sample data:

Columns: [list your columns, e.g., "user_id, signup_date, plan_type, monthly_revenue, churn_date"]
Sample rows:
[paste 3-5 sample rows or describe the data shape]

I need to:
1. [describe your transformation, e.g., "group by plan_type and calculate monthly cohort retention"]
2. [describe any filtering, e.g., "exclude users who signed up before 2024"]
3. [describe the desired output shape, e.g., "pivot table with months as columns and plan types as rows"]

Write clean, production-ready pandas code. Use method chaining where it improves readability. Add comments explaining any non-obvious transformations. Avoid removed APIs — DataFrame.append() was deprecated and then removed in pandas 2.0; use pd.concat() instead.
Include your actual column names and dtypes. Mentioning the pandas version you're using helps avoid deprecated API suggestions.
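A minimal sketch of the kind of code this prompt should produce, using hypothetical columns and sample data (your real DataFrame will differ):

```python
import pandas as pd

# Hypothetical sample data standing in for the placeholder columns above
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "plan_type": ["basic", "pro", "basic", "pro"],
    "monthly_revenue": [10.0, 50.0, 10.0, 60.0],
})

# Method-chained aggregation; note pd.concat (not the removed .append)
# is the idiom for combining frames elsewhere in a pipeline
summary = (
    df.groupby("plan_type", as_index=False)
      .agg(total_revenue=("monthly_revenue", "sum"),
           users=("user_id", "nunique"))
      .sort_values("total_revenue", ascending=False)
      .reset_index(drop=True)
)
```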
2

Web Scraping with BeautifulSoup & Selenium

Write a Python web scraper for the following target:

Target URL: [paste the URL or describe the site structure]
Data to extract: [e.g., "product name, price, rating, and number of reviews from each listing card"]
Page structure notes: [e.g., "data loads dynamically via JavaScript" or "paginated with ?page=N query param"]

Requirements:
- Use BeautifulSoup4 if the page is static HTML, or Selenium if JavaScript rendering is required
- Implement polite scraping: add a User-Agent header, 2-3 second delays between requests, and respect robots.txt
- Handle common failures: connection timeouts, missing elements, and HTTP 429 rate limiting
- Store results in a pandas DataFrame and export to CSV
- Add logging so I can monitor progress on large scrapes

If Selenium is needed, use the headless Chrome configuration with webdriver-manager for automatic driver setup.
Always check a site's robots.txt and terms of service before scraping. For JavaScript-heavy sites, mention that you need Selenium upfront.
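The robots.txt check can be handled entirely with the standard library before any scraping code runs. A small sketch (the robots.txt body and user-agent name here are made up):

```python
import urllib.robotparser

# Hypothetical robots.txt content; in practice, fetch it from the target site
robots_lines = [
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 2",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_lines)

def allowed(path: str, agent: str = "my-scraper") -> bool:
    """Return True if robots.txt permits fetching this path."""
    return rp.can_fetch(agent, path)
```

The parsed Crawl-delay value (`rp.crawl_delay(agent)`) can feed directly into the 2-3 second delay requirement above.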
3

Flask / FastAPI Endpoint Builder

Build a [Flask / FastAPI] REST API endpoint with the following specification:

Endpoint: [HTTP method] [path, e.g., "POST /api/v1/users"]
Request body:
[describe the expected JSON payload with field types and validation rules]

Business logic:
[describe what the endpoint should do, e.g., "validate the email is unique, hash the password with bcrypt, store in SQLite, return the created user without the password field"]

Requirements:
- Input validation with descriptive error messages (use Pydantic models if FastAPI)
- Proper HTTP status codes (201 for creation, 400 for validation errors, 409 for conflicts)
- Error handling that never leaks stack traces to the client
- Type hints on all function signatures
- A brief docstring for the endpoint

Also include a sample curl command to test the endpoint and a matching pytest test function.
Specify Flask or FastAPI upfront — they have very different patterns. If you need async, go with FastAPI.
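Whichever framework you pick, the validation and status-code logic is the same. A framework-agnostic sketch of the endpoint's core, with a hypothetical payload and made-up validation rules:

```python
import re
from dataclasses import dataclass

# Hypothetical request payload for a "create user" endpoint
@dataclass
class CreateUserRequest:
    email: str
    password: str

def create_user(req: CreateUserRequest, existing_emails: set[str]) -> tuple[int, dict]:
    """Core endpoint logic returning (status_code, body).

    400 for validation errors, 409 for conflicts, 201 on success.
    """
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", req.email):
        return 400, {"error": "invalid email address"}
    if len(req.password) < 8:
        return 400, {"error": "password must be at least 8 characters"}
    if req.email in existing_emails:
        return 409, {"error": "email already registered"}
    # Never echo the password back to the client
    return 201, {"email": req.email}
```

In FastAPI this shape maps naturally onto a Pydantic model plus HTTPException; in Flask, onto request.get_json() plus abort().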
4

File Automation Script

Write a Python script that automates the following file operation:

Task: [describe the task, e.g., "monitor a folder for new CSV files, validate their headers match an expected schema, rename them with a timestamp prefix, and move valid files to /processed and invalid files to /errors"]

Source directory: [path or describe]
Destination(s): [path(s) or describe]
File types to handle: [e.g., ".csv", ".json", ".pdf"]

Requirements:
- Use pathlib (not os.path) for all path operations
- Handle edge cases: file already exists at destination, permission errors, empty files, locked files
- Add logging with timestamps to both console and a log file
- Make the script idempotent — running it twice on the same files should not cause errors or duplicates
- Include a dry-run mode (--dry-run flag via argparse) that prints what would happen without making changes
- Cross-platform compatible (Windows + macOS + Linux)
Always ask for a --dry-run flag on file automation scripts. It saves you from accidentally moving or deleting the wrong files.
5

Regex Pattern Generator for Python

Write a Python regular expression for the following pattern using the re module:

Pattern to match: [describe in plain English, e.g., "extract all dollar amounts from text, including formats like $1,234.56, $50, and $1,234,567.89"]

Must match these examples:
[list 4-5 valid strings]

Must NOT match these examples:
[list 3-4 invalid strings that look similar]

Provide:
1. The compiled regex pattern with re.VERBOSE for readability and inline comments explaining each group
2. A function that uses re.findall() or re.search() depending on the use case
3. Named capture groups where they make the output more usable
4. A set of assertions (assert statements) that verify the match/no-match examples above
5. Any edge cases or limitations of the pattern
Use re.VERBOSE mode for complex patterns — it lets you add comments and whitespace inside the regex for readability.
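A sketch of what a good answer looks like for the dollar-amount example, using re.VERBOSE and a named group (the pattern intentionally requires proper comma grouping, which is a documented limitation, not a bug):

```python
import re

# Matches amounts like $1,234.56, $50, $1,234,567.89
DOLLAR = re.compile(
    r"""
    \$                       # literal dollar sign
    (?P<amount>
        \d{1,3}              # 1-3 leading digits
        (?:,\d{3})*          # optional comma-separated thousands groups
        (?:\.\d{2})?         # optional cents
    )
    """,
    re.VERBOSE,
)

def find_amounts(text: str) -> list[str]:
    """Extract the numeric part of each dollar amount in text."""
    return [m.group("amount") for m in DOLLAR.finditer(text)]
```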
6

Unit Tests with pytest

Write comprehensive pytest tests for the following Python function:

Function to test:
[paste your function code]

Module location: [e.g., "src/utils/data_processing.py"]

Generate tests covering:
1. Happy path — 3-4 typical inputs with expected outputs
2. Edge cases — empty input, None, zero-length strings, empty lists/dicts, single-element collections
3. Error cases — invalid types, out-of-range values, and verify the correct exception is raised using pytest.raises
4. Boundary conditions specific to this function's logic

Requirements:
- Use pytest fixtures for any shared setup (database connections, temp files, sample DataFrames)
- Use @pytest.mark.parametrize for testing multiple input/output pairs
- Descriptive test names following the pattern: test_[function]_[scenario]_[expected_result]
- If the function has external dependencies, show how to mock them with unittest.mock or pytest-mock
- Include a conftest.py snippet if fixtures should be shared across test files
Ask for @pytest.mark.parametrize when you have many input/output pairs — it keeps tests DRY and easy to extend.
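A compact example of the requested structure, with a hypothetical slugify function standing in for your code under test:

```python
import pytest

def slugify(text: str) -> str:
    """Hypothetical function under test: lowercase, collapse spaces to hyphens."""
    if not isinstance(text, str):
        raise TypeError("text must be a str")
    return "-".join(text.lower().split())

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),   # happy path
        ("  spaced  out  ", "spaced-out"),
        ("", ""),                          # edge case: empty string
    ],
)
def test_slugify_valid_input_returns_slug(raw, expected):
    assert slugify(raw) == expected

def test_slugify_non_string_raises_type_error():
    with pytest.raises(TypeError):
        slugify(None)
```

The test names follow the test_[function]_[scenario]_[expected_result] pattern, so a failing test tells you what broke without opening the file.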
7

Data Visualization with Matplotlib & Seaborn

Create a Python data visualization for the following dataset and goal:

Data description: [e.g., "monthly revenue by product category, 24 months, 5 categories"]
Data format: [e.g., "pandas DataFrame with columns: month, category, revenue"]
Visualization goal: [e.g., "show the revenue trend per category over time, highlight which category grew fastest"]

Requirements:
- Use [matplotlib / seaborn / both] with a clean, publication-ready style
- Set figure size appropriate for [dashboard / presentation slide / blog post]
- Include: title, axis labels with units, legend, and gridlines
- Use a colorblind-friendly palette (e.g., seaborn's "colorblind" or "Set2")
- Annotate any notable data points (peaks, crossovers, anomalies)
- Export to PNG at 300 DPI and optionally as SVG
- Use plt.tight_layout() to prevent label clipping

If sample data is needed, generate realistic fake data with numpy/pandas so the code runs immediately.
Specify the output medium (dashboard, slide, blog) — it affects font sizes, aspect ratios, and color choices significantly.
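A runnable sketch with generated fake data, covering the title/labels/legend/tight_layout requirements (categories and numbers are invented; the Agg backend keeps it headless):

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend: runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Fake monthly data so the example runs immediately
months = np.arange(1, 13)
rng = np.random.default_rng(42)

fig, ax = plt.subplots(figsize=(8, 4.5))
for name in ("hardware", "software"):  # hypothetical categories
    ax.plot(months, rng.integers(50, 150, size=12).cumsum(), label=name)

ax.set_title("Monthly revenue by category (fake data)")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue (USD, thousands)")
ax.grid(True, alpha=0.3)
ax.legend()
fig.tight_layout()

# Export at 300 DPI; an in-memory buffer here, a filename in practice
buf = io.BytesIO()
fig.savefig(buf, format="png", dpi=300)
```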
8

API Integration with Requests

Write a Python module that integrates with the following external API:

API: [name and base URL, e.g., "Stripe API — https://api.stripe.com/v1"]
Authentication: [e.g., "Bearer token in Authorization header" or "API key as query parameter"]
Endpoints I need:
1. [e.g., "GET /customers — list customers with pagination"]
2. [e.g., "POST /charges — create a charge"]

Requirements:
- Use the requests library with a Session object for connection reuse
- Implement retry logic with exponential backoff for 429 and 5xx errors (use urllib3.Retry or tenacity)
- Centralized error handling: raise custom exceptions (e.g., APIRateLimitError, APIAuthError) with the response body included
- Type hints and dataclasses or Pydantic models for response parsing
- Environment variable for the API key (never hardcoded) using os.environ.get()
- Logging of request method, URL, status code, and response time (but never log the API key or sensitive payloads)
- A simple usage example at the bottom in an if __name__ == "__main__" block
Always use a Session object when making multiple requests to the same API — it reuses TCP connections and is significantly faster.
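The retry-with-backoff requirement is worth seeing in isolation. A library-agnostic sketch using only the standard library (in production you might reach for urllib3's Retry or tenacity instead, as the prompt suggests):

```python
import logging
import random
import time
from functools import wraps

log = logging.getLogger("api-client")

def retry_with_backoff(max_attempts: int = 4, base_delay: float = 0.5):
    """Retry transient failures with exponential backoff plus jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError as exc:  # stand-in for 429/5xx handling
                    if attempt == max_attempts:
                        raise  # out of attempts: surface the original error
                    delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1)
                    log.warning("attempt %d failed (%s); retrying in %.2fs",
                                attempt, exc, delay)
                    time.sleep(delay)
        return wrapper
    return decorator
```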
9

Error Handling Best Practices Refactor

Refactor the following Python code to follow error handling best practices:

Code:
[paste your code with bare except clauses, broad exception handling, or missing error handling]

Apply these improvements:
1. Replace bare "except:" and "except Exception:" with specific exception types
2. Add context to re-raised exceptions using "raise ... from e" syntax
3. Use try/except/else/finally blocks correctly — move success logic to the else clause
4. Implement custom exception classes where the code would benefit from domain-specific errors
5. Add proper logging in except blocks (logger.exception for unexpected errors, logger.warning for expected ones)
6. Ensure resources (files, connections, locks) are cleaned up using context managers (with statements)
7. Handle the "look before you leap" vs "easier to ask forgiveness" tradeoff appropriately for each case

Show the refactored code with comments explaining why each change improves reliability. Include a brief exceptions hierarchy diagram if custom exceptions are introduced.
The single biggest improvement is usually replacing bare 'except:' with specific exceptions — it prevents accidentally catching KeyboardInterrupt and SystemExit.
10

List Comprehension & Performance Optimizer

Optimize the following Python code for readability and performance:

Code:
[paste your code with for loops, nested loops, or inefficient patterns]

Please:
1. Convert eligible for-loops to list/dict/set comprehensions where readability improves (not where it hurts)
2. Replace manual accumulation patterns with built-in functions (sum, any, all, max, min, collections.Counter)
3. Use generator expressions instead of list comprehensions when the result is only iterated once (saves memory)
4. Identify any O(n²) patterns (e.g., "if x in list" inside a loop) and fix them with sets or dicts for O(1) lookups
5. Use itertools (chain, groupby, product, islice) where they simplify the logic
6. Profile-worthy bottlenecks: flag any code that would benefit from numpy vectorization for large datasets

Show the optimized version side-by-side with the original in comments. Include Big-O complexity before and after for any algorithmic improvements.
Don't over-optimize — a clear for-loop is better than a 200-character nested comprehension. Ask ChatGPT to keep comprehensions under one line when possible.
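The O(n²) membership pattern from point 4 is the most common win in practice. A small illustrative example (the function names and data are hypothetical):

```python
def find_known_slow(requested, registered):
    """O(n*m): membership test against a list inside a loop."""
    return [rid for rid in requested if rid in registered]  # registered is a list

def find_known_fast(requested, registered):
    """O(n + m): build a set once, then each lookup is O(1)."""
    registered_set = set(registered)
    return [rid for rid in requested if rid in registered_set]
```

Same output, but the fast version scales linearly; for a few thousand items the difference is already dramatic.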
11

Virtual Environment & Dependency Setup

Create a complete Python project setup for the following:

Project name: [e.g., "data-pipeline"]
Python version: [e.g., "3.11+"]
Key dependencies: [list main libraries, e.g., "pandas, requests, sqlalchemy, pytest, black, ruff"]
Package manager preference: [pip + venv / poetry / pdm / conda]

Generate:
1. Step-by-step terminal commands to create the virtual environment and install dependencies
2. A pyproject.toml (or requirements.txt + requirements-dev.txt if using pip) with pinned versions
3. A .python-version file for pyenv compatibility
4. A basic project structure (src layout vs flat layout — recommend based on project size)
5. A Makefile or justfile with common commands: install, test, lint, format, clean
6. A .gitignore tailored for Python (venv, __pycache__, .pytest_cache, dist, .env)
7. Pre-commit config with black, ruff, and mypy hooks

Explain your reasoning for the project structure choice.
For new projects in 2026, pyproject.toml is the standard. Avoid setup.py unless you need backward compatibility with very old tooling.
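For the pip + venv path, a typical command sequence looks like this (project and dependency names are placeholders; adapt versions to your Python):

```shell
# Create and activate an isolated environment
python3.11 -m venv .venv
source .venv/bin/activate        # Windows: .venv\Scripts\activate

# Upgrade pip, then install the project with its dev extras
python -m pip install --upgrade pip
pip install -e ".[dev]"          # dependencies declared in pyproject.toml

# Or, with a plain requirements workflow:
pip install -r requirements.txt -r requirements-dev.txt
```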
12

CSV/JSON Data Processing Pipeline

Write a Python data processing pipeline for the following workflow:

Input: [describe your input files, e.g., "a directory of CSV files, each ~500MB, with columns: timestamp, sensor_id, temperature, humidity, status"]
Processing steps:
1. [e.g., "read all CSVs and combine into a single DataFrame"]
2. [e.g., "clean: drop rows where temperature is null or outside -50 to 60 range"]
3. [e.g., "transform: resample to hourly averages, add a rolling 24h mean column"]
4. [e.g., "enrich: join with a sensor_metadata.json file to add location info"]
5. [e.g., "output: save to a partitioned Parquet file by date, and a summary CSV"]

Requirements:
- Use chunked reading (pd.read_csv with chunksize) if files are large
- Add a progress bar with tqdm for long-running steps
- Validate data at each stage with assertions or pandera schemas
- Log row counts before and after each cleaning step so data loss is traceable
- Handle encoding issues (UTF-8 with fallback to latin-1)
- Include a main() function with argparse for input_dir, output_dir, and --verbose flag
- Make it resumable: skip files that have already been processed (check output directory)
For files over 100MB, always use chunked reading or Dask. Loading a 2GB CSV into a single DataFrame will exhaust memory on most machines.
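The chunked-reading and row-count-logging requirements can be sketched compactly. Here an in-memory string stands in for a large on-disk CSV, and the column names are hypothetical:

```python
import io
import pandas as pd

# Fake CSV standing in for a large file on disk
csv_data = io.StringIO(
    "timestamp,sensor_id,temperature\n"
    "2024-01-01T00:00,a,21.5\n"
    "2024-01-01T01:00,a,-80.0\n"   # out-of-range row: should be dropped
    "2024-01-01T02:00,b,19.0\n"
)

cleaned_chunks = []
for chunk in pd.read_csv(csv_data, chunksize=2):  # tiny chunksize for the demo
    before = len(chunk)
    chunk = chunk[chunk["temperature"].between(-50, 60)]
    print(f"chunk: kept {len(chunk)}/{before} rows")  # makes data loss traceable
    cleaned_chunks.append(chunk)

clean = pd.concat(cleaned_chunks, ignore_index=True)
```

With a real file you would pass the path instead of a StringIO and pick a chunksize in the tens or hundreds of thousands of rows.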

How to Use These Prompts

Copy any prompt above, replace the bracketed placeholders with your actual code, data, or project details, then paste into ChatGPT. Use GPT-4 or Claude for longer code snippets that need more context. Focus each conversation on a single task — scraping, testing, or data processing — rather than mixing multiple goals. If you use Prompt Anything Pro, you can save your most-used Python prompts as templates and trigger them instantly on any webpage, including GitHub, Jupyter notebooks, or documentation sites.
