A systematic review: effect of hand, rotary and reciprocating instrumentation on endodontic postoperative pain
Aim: This systematic review evaluated the influence of hand, rotary and reciprocating instrumentation on endodontic postoperative pain.
Methodology: A protocol was registered on PROSPERO. Electronic searches were conducted in MEDLINE, ISI Web of Science, Scopus and ClinicalTrials.gov. Articles were selected according to the following criteria: randomized clinical trials with patients undergoing endodontic treatment in permanent teeth, comparing instrumentation techniques with different kinematics (hand/rotary/reciprocating) and their effect on postoperative pain incidence, intensity or duration. Data on analgesic intake were also recorded. Risk of bias was evaluated and the GRADE framework was applied to assess the quality of evidence.
Results: Twelve studies comprising 1,659 patients were included in this review. Five studies compared hand instrumentation vs. engine-driven (rotary and/or reciprocating) systems. In three studies, postoperative pain results were worse with hand instruments than with engine-driven systems; in the other two studies, pain results for hand and engine-driven techniques were similar. Seven studies and a dataset from one of the five previous studies were included in the comparison of rotary vs. reciprocating systems, with contrasting results: postoperative pain results were worse with reciprocating systems in four studies, worse with rotary systems in two studies and equivalent in the other two studies. Data on analgesic intake were conflicting. GRADE showed low quality of evidence.
Conclusions: Hand instrumentation presented unfavourable postoperative pain results when compared with engine-driven systems. The comparison of rotary and reciprocating systems generated contrasting results. Given the low quality of evidence and conflicting findings, results should be interpreted with caution, and further well-designed randomized clinical trials on the matter are encouraged.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.