Why do a thesis at all?
Because it makes you think like a doctor and a scientist at the same time. A thesis forces you to:
- Turn a clinical question into one you can measure.
- Design a method that gives a trustworthy answer.
- Learn how to collect and interpret data without fooling yourself.
It is not about novelty for novelty’s sake. It is about learning a process. If you learn the process well, you can repeat it any time.
What makes a good thesis topic? The three rules
- Feasible — you can finish it with resources and time available. Retrospective designs are often easiest.
- Focused — answer one clear question. Don’t try to do everything.
- Valuable — it should matter to clinicians or add a measurable diagnostic or prognostic insight.
How to choose (practical checklist)
- Scan for cases you actually see daily. If your hospital sees a lot of head trauma, a traumatic brain injury (TBI) DTI study is better than a rare PET tracer project.
- Match modality to skill. If you are comfortable with ultrasound, pick a sonography project. If not, you will waste time learning new techniques.
- Check data availability. If you need histopathology as a gold standard, confirm that reports are retrievable.
- Talk to the potential guide and ask directly: have you supervised this type of project before? How many similar projects have you seen succeed?
- Estimate time. A simple retrospective imaging versus histopathology (HPE) correlation usually needs months, not years.
Turn a topic into a one-sentence question
Example topic from Set 1: “Diffusion tensor imaging metrics as predictors of 6-month outcome after moderate-severe traumatic brain injury.”
One-sentence question:
In adults with moderate to severe traumatic brain injury, do fractional anisotropy values in the corticospinal tract on admission DTI predict modified Rankin Scale at 6 months?
If you can write this one sentence, you already understand your study’s core.
Minimal protocol blueprint — the parts that matter
- Objective (single primary objective).
- Design (retrospective cohort, prospective cohort, case-control).
- Population (inclusion/exclusion; dates; single center).
- Index test(s) (what you measure on imaging and how).
- Reference standard / outcome (histology, clinical score, surgery, mortality).
- Sample size (rough estimate — even an approximate calculation will guide feasibility).
- Analysis plan (primary statistic, confounder adjustment).
- Timeline (realistic months to finish each step).
Example mini-protocol (DTI in TBI)
- Objective: To test whether admission FA in ipsilateral corticospinal tract predicts 6-month mRS.
- Design: Retrospective cohort, single center, patients admitted 2019–2023.
- Population: Adults 18–65 with GCS 3–12, admission MRI with DTI within 7 days, follow-up mRS available at 6 months. Exclude prior stroke, severe comorbidity.
- Measurements: FA and mean diffusivity (MD) in a defined ROI on the corticospinal tract (CST); two independent readers; analysis uses the mean of both readings.
- Outcome: mRS dichotomized (0–3 good, 4–6 poor).
- Sample size: pilot estimate of 80 patients to detect a moderate effect (this is a start; run a formal calculation with the expected effect size).
- Analysis: Logistic regression with FA as the predictor, adjusted for age and initial GCS; ROC curve for discriminatory performance (a code sketch follows this protocol).
- Timeline: Data collection 2 months, image processing 1 month, stats 1 month, write-up 1–2 months.
That is short, practical, and makes the work measurable.
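To make the analysis plan concrete, here is a minimal Python sketch of the logistic regression and ROC step. The file name tbi_dti_cohort.csv and the columns fa_cst, age, gcs and mrs_poor are hypothetical placeholders; adapt them to your own dataset and confirm the final analysis with your statistician.

```python
# Minimal sketch of the analysis plan above. File and column names are
# hypothetical: fa_cst (admission FA), age, gcs (initial GCS),
# mrs_poor (1 = mRS 4-6 at 6 months, 0 = mRS 0-3).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("tbi_dti_cohort.csv")
df = df.dropna(subset=["fa_cst", "age", "gcs", "mrs_poor"])

# Logistic regression: FA as predictor, adjusted for age and initial GCS
model = smf.logit("mrs_poor ~ fa_cst + age + gcs", data=df).fit()
print(model.summary())            # coefficients and p-values
print(np.exp(model.conf_int()))   # coefficient 95% CIs on the odds-ratio scale

# Discriminatory performance of the fitted model: area under the ROC curve
auc = roc_auc_score(df["mrs_poor"], model.predict(df))
print(f"AUC = {auc:.2f}")
```

Note that an AUC computed on the same data used to fit the model is optimistic; discuss internal validation with your statistician before the final write-up.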
Common problems and how to avoid them
- Too broad a question. Fix: split into primary and exploratory aims, but only power the primary one.
- Unavailable gold standard. Fix: choose a surrogate outcome that is routinely recorded. State its limitations clearly.
- Poor measurement reproducibility. Fix: define the ROI and measurement protocol, and check inter-rater reliability on a subset.
- Underpowered study. Fix: run a pilot and calculate effect sizes (a rough power check is sketched after this list). If the sample is insufficient, convert to a descriptive or feasibility paper; that is publishable too.
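Here is a minimal sketch of that rough power check, assuming the question is framed as a difference in mean FA between good and poor outcome groups. The effect size of 0.6 is a placeholder to replace with your pilot or literature estimate; a formal calculation matched to the planned logistic regression is still worth doing with a statistician.

```python
# Rough feasibility check: patients per outcome group needed to detect a
# given standardized FA difference (Cohen's d) at 80% power, two-sided alpha 0.05.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.6   # assumed standardized difference; replace with pilot/literature value
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05,
                                          power=0.80,
                                          alternative="two-sided")
print(f"About {n_per_group:.0f} patients per group")
```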
Data and stats essentials — the Feynman checklist
- Always start with a data dictionary. Know exactly what each variable means.
- For continuous imaging metrics, check the distribution. If it is skewed, report the median (with IQR) or log-transform before parametric analysis (a quick check is sketched after this checklist).
- For the primary outcome, define it clearly and stick to it. Don’t swap outcomes after seeing the results.
- Use confidence intervals, not only p-values. A CI tells you the plausible range of the effect.
- If you are unsure about stats, consult a statistician before analysis, not after.
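A minimal sketch of those distribution and confidence-interval checks for one continuous metric follows; the file name tbi_dti_cohort.csv and the column fa_cst are hypothetical placeholders.

```python
# Quick distribution and CI checks for a continuous imaging metric.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy import stats

fa = pd.read_csv("tbi_dti_cohort.csv")["fa_cst"].dropna()

# 1. Skewness: roughly symmetric -> mean/SD; clearly skewed -> median/IQR or log-transform
print("skewness:", stats.skew(fa))
print("median (IQR): %.3f (%.3f-%.3f)" % (fa.median(), fa.quantile(0.25), fa.quantile(0.75)))

# 2. 95% confidence interval for the mean (meaningful if roughly symmetric)
ci = stats.t.interval(0.95, df=len(fa) - 1, loc=fa.mean(), scale=stats.sem(fa))
print("mean %.3f, 95%% CI %.3f to %.3f" % (fa.mean(), ci[0], ci[1]))

# 3. If clearly skewed, a log transform often helps before parametric tests
fa_log = np.log(fa)   # FA values are positive, so the log is defined
```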
Writing and defending the thesis
- Write the introduction as a short argument: what is known, gap, and how your study fills it.
- Methods should be reproducible. Assume the reader will repeat your work from your text alone.
- In results, be honest. Negative or null results are science too if the methods were sound.
- Anticipate viva questions: limitations, sources of bias, how generalizable the results are, and what you would do next.
Publication mindset
Design with a target journal in mind. If data are limited, aim for a specialty journal or a methods/feasibility paper. If the results are strong, push for higher impact. Either way, publish — your thesis should leave the hospital as a paper.
Quick path to finish (30-60-90 day sprint)
- Days 0–30: finalize question, get ethical approval, build list of cases.
- Days 31–60: extract images and measures, start blinded reads, create dataset.
- Days 61–90: finalize the statistics, write the introduction and methods, and submit to your guide for edits.
This is an optimistic plan for a retrospective study with available data. Adjust realistically for your workload.
Tools and small tricks
- Use PACS bulk export and simple spreadsheets to collect variables (a loading check is sketched after this list).
- Predefine ROI and save a screenshot template for readers.
- Use free statistics tools such as R or Python, or ask your biostatistics department for support.
- Keep a daily log of hours spent and steps done. Progress keeps momentum.
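A small sketch of that spreadsheet habit, pairing the export with a data dictionary. The files tbi_dti_cohort.csv and data_dictionary.csv, and the dictionary's "variable" column, are hypothetical placeholders.

```python
# First pass over an exported spreadsheet against its data dictionary.
# File and column names are hypothetical placeholders.
import pandas as pd

data = pd.read_csv("tbi_dti_cohort.csv")          # one row per patient
dictionary = pd.read_csv("data_dictionary.csv")   # one row per variable, with a "variable" column

# Every dataset column should be defined in the data dictionary
undefined = set(data.columns) - set(dictionary["variable"])
print("Columns missing from the data dictionary:", undefined or "none")

# Missingness per variable: decide early how each gap will be handled
print(data.isna().mean().sort_values(ascending=False).round(2))
```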
One last thing: pick a mentor who will push and protect you
A good guide helps with feasibility, access to data, and realistic expectations. Find a mentor who responds, sets deadlines, and gives concrete feedback.
Procedure Pearls — practical viva and exam tips
- Always state one primary objective up front. The examiners will ask for this first.
- Memorize exact inclusion/exclusion criteria and sample size rationale. If you modified them, explain why transparently.
- Know the limitations and biases: selection, information, and confounding. Say how you mitigated them.
- Be ready to explain one key figure or table in under two minutes. That will prove you own your data.
- Practice the 90-second elevator pitch: background, question, method, key result, one sentence implication.