
# Project NETRA

```text
███╗   ██╗███████╗████████╗██████╗  █████╗
████╗  ██║██╔════╝╚══██╔══╝██╔══██╗██╔══██╗
██╔██╗ ██║█████╗     ██║   ██████╔╝███████║
██║╚██╗██║██╔══╝     ██║   ██╔══██╗██╔══██║
██║ ╚████║███████╗   ██║   ██║  ██║██║  ██║
╚═╝  ╚═══╝╚══════╝   ╚═╝   ╚═╝  ╚═╝╚═╝  ╚═╝
```

[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE) [![Python](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://python.org) [![React](https://img.shields.io/badge/react-18.2%2B-blue.svg)](https://reactjs.org) [![Neo4j (optional)](https://img.shields.io/badge/neo4j-optional-green.svg)](https://neo4j.com) [![Deploy with Vercel](https://vercel.com/button)](https://netra-ai.vercel.app/) [![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://netra-8j8n.onrender.com)

An AI‑powered financial intelligence platform for detecting and investigating suspicious activity across accounts, persons, and companies.

## Demo

Watch the demo on YouTube: https://youtu.be/r_G-eIlJKkU

## Overview

Project NETRA provides a unified workflow for ingesting datasets (CSV/ZIP), calculating hybrid risk scores, inspecting networks, and generating AI‑assisted PDF reports. It ships with synthetic datasets and lets investigators upload data from the UI.
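A "hybrid" risk score of this kind can be pictured as a weighted blend of independent signals. A minimal sketch in Python — the component names, the 0–100 scale, and the weights here are illustrative assumptions, not the actual formula implemented in `risk_scoring.py`:

```python
def hybrid_risk_score(rule_score: float, graph_score: float,
                      w_rule: float = 0.6, w_graph: float = 0.4) -> float:
    """Blend a rule-based component and a graph-derived component
    into a single 0-100 risk value. Weights are illustrative only."""
    if not (0 <= rule_score <= 100 and 0 <= graph_score <= 100):
        raise ValueError("component scores must be in [0, 100]")
    return round(w_rule * rule_score + w_graph * graph_score, 2)

# Example: strong rule hits, moderate network centrality
print(hybrid_risk_score(90, 55))  # → 76.0
```

Weighting the components linearly keeps individual signals auditable, which matters when an investigator has to justify why an alert fired.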


## Architecture

```mermaid
flowchart LR
  %% Client Layer
  subgraph Client
    U[User]
    FE[Frontend - React + Vite]
  end

  %% Backend API and Services
  subgraph Backend
    API[REST API]
    RS[risk_scoring.py]
    RG[report_generator.py]
    GA[graph_analysis.py]
    CM[case_manager.py]
    AS[ai_summarizer.py]
  end

  %% Data Sources
  subgraph DS[Data Sources]
    CSV[CSV files<br/>backend/generated-data]
    NEO[Neo4j optional]
    FS[Firestore optional]
  end

  %% Client -> API
  U -->|actions| FE
  FE -->|Fetch JSON or PDF| API
  FE -->|Upload ZIP or CSV| API

  %% API -> Services
  API --> RS
  API --> RG
  API --> GA
  API --> CM
  API --> AS

  %% Services <-> Data
  RS --> CSV
  GA --> NEO
  CM --> FS
  RS -->|AlertScores.csv| CSV

  %% Responses
  RG -->|PDF| FE
  API -->|JSON| FE
```

## Connection Flow

```mermaid
sequenceDiagram
    participant User
    participant FE as Frontend (React)
    participant API as Flask API (/api)
    participant Risk as Risk Scoring
    participant Graph as Graph Analysis
    participant Report as Report Generator
    participant Store as Firestore (optional)
    participant Neo4j as Neo4j (optional)
    participant CSV as CSV Data

    User->>FE: Open app / Login
    FE->>API: GET /alerts (Bearer token)
    API->>Risk: Load scores
    Risk->>CSV: Read AlertScores.csv
    API-->>FE: Alerts JSON

    User->>FE: Upload dataset (CSV/ZIP)
    FE->>API: POST /datasets/upload
    API->>CSV: Replace files
    API->>Risk: Re-run analysis
    API-->>FE: Upload OK

    User->>FE: Create case
    FE->>API: POST /cases
    API->>Store: Create/Update case (if configured)
    API-->>FE: Case created (caseId)

    User->>FE: Investigate person/case
    FE->>API: GET /graph/:personId
    API->>Graph: Build graph
    Graph->>Neo4j: Query (if available)
    Graph->>CSV: Synthesize fallback
    API-->>FE: Graph JSON

    User->>FE: Generate report
    FE->>API: GET /report/:id
    API->>Report: Compile PDF
    Report->>Risk: Pull scores
    Report->>CSV: Fetch details
    API-->>FE: PDF (blob)

    FE-->>User: Render views / Download report
```
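The first exchange in this diagram — `GET /alerts` with a Bearer token — can also be scripted against a local backend. A standard-library sketch; the base URL matches the Quick Start default, and the token is whatever your auth mode issues (a Firebase ID token, or the mock token in local mode):

```python
import json
import urllib.request

API_BASE = "http://localhost:5001/api"  # Quick Start default

def auth_headers(token: str) -> dict:
    """Build the Authorization header sent on every /api call."""
    return {"Authorization": f"Bearer {token}"}

def fetch_alerts(token: str, base: str = API_BASE):
    """GET /alerts, mirroring the first request in the sequence above."""
    req = urllib.request.Request(f"{base}/alerts", headers=auth_headers(token))
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Usage (requires the backend from Quick Start to be running):
#   alerts = fetch_alerts("your-token")
```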

## Detailed Application Flow

```mermaid
flowchart TD
    Start([User opens app]) --> AuthCheck[Auth check via AuthProvider]
    AuthCheck -->|Authenticated| GoDashboard[Route to /dashboard]
    AuthCheck -->|No auth| GoLogin[Route to /login]

    subgraph Dashboard
      GoDashboard --> FetchAlerts[GET /alerts]
      FetchAlerts --> ShowAlerts[Render alerts & metrics]
      ShowAlerts --> ActionTriage[Open Triage]
      ShowAlerts --> ActionInvestigate[Open Investigation]
      ShowAlerts --> ActionReporting[Open Reporting]
    end

    subgraph Triage
      ActionTriage --> CreateCase[POST /cases]
      CreateCase --> CaseCreated[(caseId)]
      CaseCreated --> NavWorkspace[Go to /workspace/:caseId]
    end

    subgraph Investigation_Workspace
      ActionInvestigate --> LoadGraph[GET /graph/:personId]
      NavWorkspace --> LoadGraph
      LoadGraph --> ViewGraph[React Flow graph + details]
      ViewGraph --> UpdateNotes[PUT /cases/:id/notes]
      UpdateNotes --> NotesSaved[Notes saved - Firestore or local]
    end

    subgraph Reporting
      ActionReporting --> GetReport[GET /report/:id]
      GetReport --> PDF[PDF blob]
      PDF --> Download[Trigger download]
    end

    subgraph Settings_and_Datasets
      Settings[Open Settings] --> Upload[POST /datasets/upload - CSV or ZIP]
      Upload --> Reanalyze[Run analysis]
      Reanalyze --> AlertsUpdated[Updated AlertScores.csv]
      AlertsUpdated --> FetchAlerts
    end

    GoLogin --> LoginFlow[Login - Firebase or mock token]
    LoginFlow --> AuthCheck
```
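The triage and workspace steps in this flow map onto two calls, `POST /cases` and `PUT /cases/:id/notes`. A standard-library sketch; the JSON field names (`personId`, `notes`) and the `caseId` key in the response are assumptions meant to illustrate the shape of the exchange, not the backend's confirmed contract:

```python
import json
import urllib.request

API_BASE = "http://localhost:5001/api"  # Quick Start default

def notes_path(case_id: str) -> str:
    """Path for the notes update shown in the workspace flow above."""
    return f"/cases/{case_id}/notes"

def _send_json(method: str, path: str, token: str, payload: dict):
    """Send a JSON body to the API and decode the JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        method=method,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def create_case(token: str, person_id: str):
    # "personId" is an assumed field name; check the backend's /cases handler.
    return _send_json("POST", "/cases", token, {"personId": person_id})

def update_notes(token: str, case_id: str, notes: str):
    return _send_json("PUT", notes_path(case_id), token, {"notes": notes})

# Usage (backend running; ids are placeholders):
#   case = create_case(token, "P-0001")
#   update_notes(token, case["caseId"], "Possible layering via mule accounts")
```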

Backend (Flask):

Frontend (React + Vite):

Data generation:

## Quick Start (Local)

Prerequisites: Python 3.10+, Node 18+.

Backend:

1. `cd backend`
2. Create a virtual environment with `python -m venv venv`, then activate it (Windows: `venv\Scripts\activate`; macOS/Linux: `source venv/bin/activate`).
3. `pip install -r requirements.txt`
4. Set environment variables (optional but recommended):

Frontend:

1. `cd frontend`
2. `npm install`
3. Optional: set `VITE_API_URL` to your backend API base (e.g., `http://localhost:5001/api`). If unset, the app auto‑detects localhost and defaults to `http://localhost:5001/api`.
4. `npm run dev` (serves at `http://localhost:5173`)

Authentication (local/mock):

## Dataset Uploads (CSV/ZIP)

Upload via the UI: Settings → Data Management.
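If you'd rather script it, the ready‑named CSVs in `samples/` can be bundled into a ZIP that matches what the upload expects. A standard-library sketch — keeping the original file names inside the archive is an assumption based on the "ready‑named" samples, i.e., that the loader identifies each dataset by name:

```python
import zipfile
from pathlib import Path

def bundle_csvs(src_dir: str = "samples", out_path: str = "dataset.zip") -> str:
    """Zip every CSV in src_dir for upload via Settings -> Data Management."""
    csv_files = sorted(Path(src_dir).glob("*.csv"))
    if not csv_files:
        raise FileNotFoundError(f"no CSV files found in {src_dir!r}")
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for csv_file in csv_files:
            # Preserve file names: the backend is assumed to match on them.
            zf.write(csv_file, arcname=csv_file.name)
    return out_path
```

The resulting `dataset.zip` can then be uploaded through the UI (or via `POST /api/datasets/upload`).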

Schemas (minimal required columns):

Sample data that triggers alerts:

## Key Endpoints (Backend)

All endpoints are served under the base path `/api`.

Auth:

Reporting
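The reporting endpoint streams a PDF, so a client only needs to write the response bytes to disk. A hedged sketch — the `:id` value and the local file name are illustrative, and the base URL is the Quick Start default:

```python
import urllib.request
from pathlib import Path

API_BASE = "http://localhost:5001/api"  # Quick Start default

def report_filename(report_id: str) -> str:
    """Local file name for a downloaded report (naming is our choice)."""
    return f"report-{report_id}.pdf"

def download_report(report_id: str, token: str, dest_dir: str = ".") -> Path:
    """GET /report/:id and save the PDF blob, as the Reporting page does."""
    req = urllib.request.Request(
        f"{API_BASE}/report/{report_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    out = Path(dest_dir) / report_filename(report_id)
    with urllib.request.urlopen(req, timeout=30) as resp:
        out.write_bytes(resp.read())
    return out

# Usage (backend running; id is a placeholder):
#   pdf_path = download_report("A-0042", token)
```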

Data Generation

## Project Structure

```text
project-NETRA/
├── backend/
│   ├── app.py                 # Flask API (CORS, endpoints, uploads, reports)
│   ├── services/              # risk_scoring, report_generator, graph_analysis, case_manager, ai_summarizer
│   ├── utils/                 # data_loader (schemas), auth (mock/real)
│   └── generated-data/        # CSVs + AlertScores.csv (+ metadata.json if present)
├── frontend/
│   ├── src/pages/             # Dashboard, Triage, Investigation, Reporting, Settings
│   ├── src/services/api.js    # API base resolver + token provider + endpoints
│   └── public/samples/        # Downloadable sample CSVs
├── data-generation/           # generate_data.py, patterns.py (synthetic data)
├── samples/                   # Ready‑named CSVs to ZIP & upload (alerts guaranteed)
└── README.md
```

## Configuration

Backend environment:

Frontend environment:

## Notes

## Contributing

1. Fork the repository.
2. Create a feature branch: `git checkout -b feature/your-change`.
3. Commit and push your changes.
4. Open a pull request.

## Group Members

- Anurag Waskle
- Soham S. Malvankar
- Harshit Kushwaha
- Aryan Pandey
- Deepti Singh

## License

MIT © Project NETRA contributors