
Peer-reviewed.
Statistically validated.

Two IRJET papers (Impact Factor 8.226) and an MSc dissertation benchmarking 12 LLMs across 1,144 verified bugs and 4,320 evaluations with full statistical validation.

Dissertation

Open vs Closed LLMs vs Traditional Linters

A Comprehensive Empirical Analysis Across Four Critical Dimensions

Sep 2024 — Sep 2025

MSc dissertation supervised by Prof. Rami Bahsoon. Benchmarked 12 LLMs and 3 static analysis tools across 1,144 verified bugs using a novel four-dimensional evaluation framework, producing 4,320 primary evaluations with full statistical validation (ANOVA, Welch's t-test, Cohen's d). Key finding: open-source LLMs achieve 92.5% of premium model accuracy at zero cost.
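The statistical battery named above (Welch's t-test and Cohen's d) reduces to a few lines of plain Python; the accuracy samples below are purely illustrative placeholders, not the dissertation's data:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic: mean difference scaled by unpooled standard errors."""
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variance (n-1)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(v1 / len(a) + v2 / len(b))

def cohens_d(a, b):
    """Cohen's d effect size: mean difference over the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Hypothetical per-category accuracies for two model groups (illustrative only)
premium = [0.82, 0.79, 0.85, 0.81, 0.78]
open_source = [0.76, 0.74, 0.79, 0.75, 0.72]
print(round(welch_t(premium, open_source), 2), round(cohens_d(premium, open_source), 2))
```

In practice a library routine (e.g. SciPy's Welch variant of the t-test) would also supply degrees of freedom and a p-value; the sketch only shows the statistics themselves.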

12 LLMs benchmarked · 1,144 verified bugs · 4,320 evaluations · 75/100 Distinction grade

Key finding

“Open-source LLMs achieve 92.5% of premium model accuracy at zero cost.”

Selected modules

Intelligent Software Engineering · 70
Advanced Networking · 67
Mobile & Ubiquitous Computing · 66
Human-Computer Interaction · 58
Designing & Managing Secure Systems · 57
Algorithms & Complexity · 56
Peer-reviewed papers
IRJET · IF 8.226
Paper № 01

BERT at the Barricades: Advanced AI Strategies for Combating Spam, Phishing, and Malicious URLs

November 2023

IRJET · Volume 10, Issue 11 · IF 8.226

e-ISSN 2395-0056 · p-ISSN 2395-0072

Peer-reviewed research applying BERT, DistilBERT, and RoBERTa transformer models for cybersecurity threat detection across three attack vectors: phishing emails, spam SMS, and malicious URLs.

Key results

Phishing email detection · DistilBERT fine-tuned classifier · 99.36%
Malicious URL detection · BERT-base · 0.9611 precision · 0.9287 recall · 94.5%
SMS spam detection · RoBERTa classifier · 99.8%
Multimodal (BERT + CNN) · combined text + image-based threats · 94.5%
Inference latency · real-time deployment ready · <50ms
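The malicious-URL row reports precision and recall separately; its 94.5% headline is consistent with their harmonic mean (F1), which a couple of lines verify (reading 94.5% as the F1 is an inference, not stated explicitly above):

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# BERT-base malicious-URL figures reported above
print(round(f1_score(0.9611, 0.9287), 4))  # 0.9446, i.e. ~94.5%
```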
PyTorch · HuggingFace Transformers · BERT · DistilBERT · RoBERTa
Read on IRJET →
Paper № 02

A Multimodal Approach to Emotion, Hate Speech, Sarcasm, and Slang Detection in Social Media Text

April 2024

IRJET · Volume 11, Issue 4 · IF 8.226

e-ISSN 2395-0056 · p-ISSN 2395-0072

B.Tech capstone project — BERT-based multimodal text classification system for four simultaneous NLP tasks on social media content. Supervised by Prof. Sheetal Shimpikar.

Key results

Emotion detection · 0.91 F1 · 416,809 entries · 6 categories
Hate speech detection · 0.88 F1 · 3,000 labelled comments
Sarcasm detection · 0.98 F1 · 5,000 tweets
Slang detection · 98% · 17,600 sentences
Outperformed SVM baseline by +24% across all four tasks
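One way to summarise the four per-task scores is an unweighted macro-average; a minimal sketch (macro-F1 is an aggregation choice made here, not a figure from the paper, and the slang score is treated as 0.98 for this purpose):

```python
import statistics

# Per-task scores reported above (slang's 98% treated as F1 for this sketch)
scores = {"emotion": 0.91, "hate_speech": 0.88, "sarcasm": 0.98, "slang": 0.98}

macro = statistics.mean(scores.values())  # unweighted mean across tasks
print(round(macro, 4))  # 0.9375
```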
PyTorch · HuggingFace Transformers · BERT · Multimodal NLP
Read on IRJET →