# Project NETRA

An AI‑powered financial intelligence platform for detecting and investigating suspicious activity across accounts, persons, and companies.

[License](LICENSE) · [Python](https://python.org) · [React](https://reactjs.org) · [Neo4j](https://neo4j.com) · [Frontend (Vercel)](https://netra-ai.vercel.app/) · [Backend (Render)](https://netra-8j8n.onrender.com)
Watch the demo on YouTube: https://youtu.be/r_G-eIlJKkU
Project NETRA provides a unified workflow for ingesting datasets (CSV/ZIP), calculating hybrid risk scores, inspecting networks, and generating AI‑assisted PDF reports. It ships with synthetic datasets and lets investigators upload data from the UI.
Highlights:
```mermaid
flowchart LR
    %% Client Layer
    subgraph Client
        U[User]
        FE[Frontend - React + Vite]
    end

    %% Backend API and Services
    subgraph Backend
        API[REST API]
        RS[risk_scoring.py]
        RG[report_generator.py]
        GA[graph_analysis.py]
        CM[case_manager.py]
        AS[ai_summarizer.py]
    end

    %% Data Sources
    subgraph Data Sources
        CSV[CSV files<br/>backend/generated-data]
        NEO[Neo4j optional]
        FS[Firestore optional]
    end

    %% Client -> API
    U -->|actions| FE
    FE -->|Fetch JSON or PDF| API
    FE -->|Upload ZIP or CSV| API

    %% API -> Services
    API --> RS
    API --> RG
    API --> GA
    API --> CM
    API --> AS

    %% Services <-> Data
    RS --> CSV
    GA --> NEO
    CM --> FS
    RS -->|AlertScores.csv| CSV

    %% Responses
    RG -->|PDF| FE
    API -->|JSON| FE
```
```mermaid
sequenceDiagram
    participant User
    participant FE as Frontend (React)
    participant API as Flask API (/api)
    participant Risk as Risk Scoring
    participant Graph as Graph Analysis
    participant Report as Report Generator
    participant Store as Firestore (optional)
    participant Neo4j as Neo4j (optional)
    participant CSV as CSV Data

    User->>FE: Open app / Login
    FE->>API: GET /alerts (Bearer token)
    API->>Risk: Load scores
    Risk->>CSV: Read AlertScores.csv
    API-->>FE: Alerts JSON

    User->>FE: Upload dataset (CSV/ZIP)
    FE->>API: POST /datasets/upload
    API->>CSV: Replace files
    API->>Risk: Re-run analysis
    API-->>FE: Upload OK

    User->>FE: Create case
    FE->>API: POST /cases
    API->>Store: Create/Update case (if configured)
    API-->>FE: Case created (caseId)

    User->>FE: Investigate person/case
    FE->>API: GET /graph/:personId
    API->>Graph: Build graph
    Graph->>Neo4j: Query (if available)
    Graph->>CSV: Synthesize fallback
    API-->>FE: Graph JSON

    User->>FE: Generate report
    FE->>API: GET /report/:id
    API->>Report: Compile PDF
    Report->>Risk: Pull scores
    Report->>CSV: Fetch details
    API-->>FE: PDF (blob)
    FE-->>User: Render views / Download report
```
```mermaid
flowchart TD
    Start([User opens app]) --> AuthCheck[Auth check via AuthProvider]
    AuthCheck -->|Authenticated| GoDashboard[Route to /dashboard]
    AuthCheck -->|No auth| GoLogin[Route to /login]

    subgraph Dashboard
        GoDashboard --> FetchAlerts[GET /alerts]
        FetchAlerts --> ShowAlerts[Render alerts & metrics]
        ShowAlerts --> ActionTriage[Open Triage]
        ShowAlerts --> ActionInvestigate[Open Investigation]
        ShowAlerts --> ActionReporting[Open Reporting]
    end

    subgraph Triage
        ActionTriage --> CreateCase[POST /cases]
        CreateCase --> CaseCreated[(caseId)]
        CaseCreated --> NavWorkspace[Go to /workspace/:caseId]
    end

    subgraph Investigation_Workspace
        ActionInvestigate --> LoadGraph[GET /graph/:personId]
        NavWorkspace --> LoadGraph
        LoadGraph --> ViewGraph[React Flow graph + details]
        ViewGraph --> UpdateNotes[PUT /cases/:id/notes]
        UpdateNotes --> NotesSaved[Notes saved - Firestore or local]
    end

    subgraph Reporting
        ActionReporting --> GetReport[GET /report/:id]
        GetReport --> PDF[PDF blob]
        PDF --> Download[Trigger download]
    end

    subgraph Settings_and_Datasets
        Settings[Open Settings] --> Upload[POST /datasets/upload - CSV or ZIP]
        Upload --> Reanalyze[Run analysis]
        Reanalyze --> AlertsUpdated[Updated AlertScores.csv]
        AlertsUpdated --> FetchAlerts
    end

    GoLogin --> LoginFlow[Login - Firebase or mock token]
    LoginFlow --> AuthCheck
```
Backend (Flask):
- Serves data from the CSVs in backend/generated-data/.
- Services: risk scoring (services/risk_scoring.py), AI summarizer, report generator, and an optional graph analyzer.

Frontend (React + Vite):
- Pages for Dashboard, Triage, Investigation, Reporting, and Settings (frontend/src/pages/).
Data generation:
- data-generation/generate_data.py produces CSVs into backend/generated-data/.
- A Neo4j loader (load_to_neo4j.py) lives in backend/data-generation/.

Prerequisites: Python 3.10+, Node 18+.
Backend:
1) cd backend
2) python -m venv venv, then activate it (Windows: venv\Scripts\activate; macOS/Linux: source venv/bin/activate)
3) pip install -r requirements.txt
4) Set environment variables (optional but recommended):
   - GEMINI_API_KEY for AI summaries (otherwise a rule‑based fallback is used)
   - FRONTEND_URL for CORS (e.g., http://localhost:5173)
   - FIREBASE_CREDENTIALS or GOOGLE_APPLICATION_CREDENTIALS if using Firestore
5) Run: python app.py (serves at http://localhost:5001)

Frontend:
1) cd frontend
2) npm install
3) Optional: set VITE_API_URL to your backend API base (e.g., http://localhost:5001/api). If unset, it auto‑detects localhost and uses http://localhost:5001/api.
4) npm run dev (http://localhost:5173)
Authentication (local/mock):
- Without Firebase configured, requests use the mock token: Authorization: Bearer mock-jwt-token-12345.
- Upload data via the UI: Settings → Data Management.
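Sample CSVs can also be bundled for upload programmatically. The sketch below uses only the standard library and stops at producing the ZIP bytes; the exact multipart field name expected by POST /datasets/upload is not documented here, so the HTTP step is left out.

```python
import io
import zipfile
from pathlib import Path

def bundle_csvs(csv_dir: str) -> bytes:
    """Zip every CSV in csv_dir (e.g. samples/) into an in-memory archive,
    suitable for the CSV/ZIP upload in Settings → Data Management."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for csv_path in sorted(Path(csv_dir).glob("*.csv")):
            zf.write(csv_path, arcname=csv_path.name)  # keep flat file names
    return buf.getvalue()
```

Calling `bundle_csvs("samples")` from the repo root yields a ZIP with the ready-named sample CSVs at the archive root, which is the layout the upload flow expects for a multi-file dataset.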
Schemas (minimal required columns):
Sample data that triggers alerts:
- samples/ at the repo root (ready‑named CSVs), or frontend/public/samples/ for individual examples and a README.

API endpoints (base path: /api):
- GET /alerts → list alerts (reads AlertScores.csv).
- GET /persons?q=<query> → search persons.
- GET /investigate/<person_id> → risk breakdown for a person.
- GET /graph/<person_id> → network; synthesizes from CSVs if Neo4j is empty or unavailable.
- GET /report/<person_or_case_id> → PDF report; accepts a person_id or case_id.
- POST /cases → create a case; the body must include person_id or embed it in risk_profile.person_details.
- GET /cases → list cases (Firestore or local fallback).
- POST /datasets/upload → upload CSV/ZIP; validates, reloads, and reruns analysis.
- GET /datasets/metadata → seed/snapshot/counts (counts are derived even if metadata.json is absent).
- POST /run-analysis (or ?sync=1) and GET /run-analysis/status → control risk analysis.
- /settings/profile, /settings/api-key, /settings/theme, /settings/regenerate-data, /settings/clear-cases.

Auth:
- All endpoints expect Authorization: Bearer <token>; locally, use mock-jwt-token-12345.

Reports:
- Request /api/report/<id>; you can pass a caseId or personId. Reports read AlertScores.csv to match the dashboard.

Data generation:
- data-generation/generate_data.py writes CSVs to backend/generated-data/.
- Use backend/data-generation/load_to_neo4j.py for loading into Neo4j.

Project structure:

```text
project-NETRA/
├── backend/
│   ├── app.py                  # Flask API (CORS, endpoints, uploads, reports)
│   ├── services/               # risk_scoring, report_generator, graph_analysis, case_manager, ai_summarizer
│   ├── utils/                  # data_loader (schemas), auth (mock/real)
│   └── generated-data/         # CSVs + AlertScores.csv (+ metadata.json if present)
├── frontend/
│   ├── src/pages/              # Dashboard, Triage, Investigation, Reporting, Settings
│   ├── src/services/api.js     # API base resolver + token provider + endpoints
│   └── public/samples/         # Downloadable sample CSVs
├── data-generation/            # generate_data.py, patterns.py (synthetic data)
├── samples/                    # Ready‑named CSVs to ZIP & upload (alerts guaranteed)
└── README.md
```
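After an upload or a POST to /run-analysis, the analysis runs in the background and GET /run-analysis/status reports progress. A polling helper might look like the sketch below; the HTTP call is injected so the snippet stays transport-agnostic, and the status payload shape (a "status" field that reads "running" while in progress) is an assumption, not a documented contract.

```python
import time
from typing import Callable

def wait_for_analysis(get_status: Callable[[], dict],
                      timeout_s: float = 60.0,
                      poll_s: float = 1.0) -> dict:
    """Poll GET /run-analysis/status until the run leaves the 'running' state.

    get_status performs the actual HTTP request and returns the decoded
    JSON payload; its {"status": ...} shape is assumed for illustration.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_status()
        if status.get("status") != "running":
            return status  # finished, failed, or never started
        if time.monotonic() >= deadline:
            raise TimeoutError("risk analysis did not finish in time")
        time.sleep(poll_s)
```

In practice `get_status` would wrap an authenticated GET to /api/run-analysis/status; injecting it also makes the helper trivially testable with canned responses.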
Backend environment:
- FRONTEND_URL (for CORS), e.g., http://localhost:5173
- GEMINI_API_KEY (optional): for AI summaries
- FIREBASE_CREDENTIALS (JSON or base64 JSON) or GOOGLE_APPLICATION_CREDENTIALS (path) for Firestore
- RISK_ALERT_THRESHOLD (optional, default 10)

Frontend environment:
- VITE_API_URL (optional): override the API base; otherwise the app auto‑detects http://localhost:5001/api on localhost, or uses same‑origin /api in production
- VITE_USE_MOCK_AUTH (optional): use mock auth in local development

Contributing:
1) Fork the repository.
2) Create a feature branch (git checkout -b feature/your-change).
3) Commit and push.
4) Open a pull request.
Team:

| Anurag Waskle | Soham S. Malvankar | Harshit Kushwaha | Aryan Pandey | Deepti Singh |
|---|---|---|---|---|
MIT © Project NETRA contributors