Contacting study authors or experts in the field
The handbooks focus on identifying contact details and considering how to request studies or study data [ 9 , 10 , 13 ]. The studies evaluate the effectiveness of methods to make contact and elicit a response. Six empirical studies were included [ 6 , 14 – 18 ].
It is used for identifying unpublished or on-going studies [ 10 ]; identifying missing, incomplete, discordant or unreported study data, or completed but unpublished studies [ 9 , 13 , 14 , 16 – 18 ]; and asking study authors (or topic experts) to review a list of studies included at full text in a review, to see whether any studies had been inadvertently overlooked [ 9 , 10 ].
Two handbooks and one study provided detail on identifying contact details [ 6 , 9 , 10 ]. The Cochrane Handbook suggests that review authors should contact the original investigators, identifying contact details from study reports, recent publications, staff listings or a search of the internet [ 13 ]. Colleagues, relevant research organisations and specialist libraries can also be a valuable source of author information and contact details [ 9 , 10 ]. A study by McManus et al. used a questionnaire, primarily to request study data or references, but also to ask recipients to recommend the names of other authors to contact [ 6 ]. A study by Hetherington et al. contacted authors and experts by letter in an attempt to identify unpublished trials [ 17 ].
Two studies reported using a multi-stage protocol to contact authors and request data: Selph et al. devised and followed a protocol that used both e-mail and telephone contact with the corresponding authors at defined stages over a period of 15 days [ 16 ]. Gibson et al. devised a similar protocol, although focused on e-mail contact, targeting first the corresponding authors and finally the last author and statisticians by e-mail and then telephone (statisticians were contacted due to the specific focus of the case study) [ 14 ]. Selph et al. contacted 45 authors and 28 (62%) provided study data [ 16 ], and Gibson et al. contacted 146 authors and 46 (31.5%) provided study data [ 14 ].
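The staged protocols described above can be sketched as a simple contact schedule. The stage timings, channels and targets below are illustrative assumptions loosely modelled on Selph et al.'s 15-day e-mail/telephone protocol, not the published schedule itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ContactStage:
    day: int      # days after the initial request
    channel: str  # "e-mail" or "telephone"
    target: str   # which author to approach

# Illustrative stages only: the published protocols define their own timings.
PROTOCOL = [
    ContactStage(0, "e-mail", "corresponding author"),
    ContactStage(7, "e-mail", "corresponding author"),
    ContactStage(11, "telephone", "corresponding author"),
    ContactStage(15, "telephone", "last author"),
]

def next_stage(days_elapsed: int, responded: bool) -> Optional[ContactStage]:
    """Return the next contact attempt due, or None once the author
    replies or the protocol is exhausted."""
    if responded:
        return None
    remaining = [s for s in PROTOCOL if s.day >= days_elapsed]
    return remaining[0] if remaining else None
```

A defined end point (here, day 15) matters because, as noted below, Selph et al. found that all authors who provided data did so within the first few attempts.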
Two studies reported e-mail to be an effective method of contact [ 14 , 15 ]. O’Leary reported a response rate of 73% using e-mail contact, finding that more responses were obtained from an institutional address than from a Hotmail address (86 vs 57%, p = 0.02) [ 15 ]. Conversely, Reveiz et al. achieved only a 7.5% response rate from contacting 525 study authors to identify RCTs, although they did identify 10 unpublished RCTs and links to 21 unregistered and on-going RCTs [ 18 ]. Gibson et al. found that e-mail was the method most likely to receive a reply when compared to letter (hazard ratio [HR] = 2.5; 95% confidence interval [CI] = 1.3–4.0), but that a combined approach of letter and e-mail, whilst generating a higher response rate, was not statistically different from e-mail alone (73 vs 47%, p = 0.36); 146 authors were contacted overall and 46 responded [ 14 ].
Hetherington et al. sent letters to 42,000 obstetricians and paediatricians in 18 countries in an attempt to identify unpublished controlled trials in perinatal medicine [ 17 ]. Responses were received from 481 individuals indicating they would provide details concerning unpublished studies, and 453 questionnaires were completed and returned which identified 481 unpublished trials [ 17 ].
Chapter Seven of The Cochrane Handbook offers guidance on how to set out requests for studies or study data when contacting study authors [ 13 ]. The guidance suggests considering whether the request is open-ended or seeks specific information and, therefore, whether to include an uncompleted or partially completed data collection form or to request specific data (e.g. individual patient data) [ 13 ]. McManus et al. evaluated the use of a questionnaire to identify studies, study data and the names of relevant authors to contact for a systematic review [ 6 ]. The questionnaire resulted in the identification of 1057 references unique to the review, but no unpublished data were offered [ 6 ].
Two handbooks recommend submitting a list of included studies to authors [ 9 ] or topic experts [ 10 ] to identify any potentially missing studies. The Cochrane Handbook suggests including the review’s inclusion criteria as a guide to authors [ 9 ].
Five studies claimed that identifying additional published or unpublished studies, study data or references is possible by contacting study authors [ 6 , 14 , 16 – 18 ]. McManus et al. identified 23 references (out of 75 included in the review overall) by contacting study authors [ 6 ]; Reveiz et al. identified 10 unpublished RCTs and 21 unregistered or on-going RCTs [ 18 ]; two studies stated that they identified additional study data but did not separate their findings from contacting study authors from other methods of study identification [ 14 , 16 ]; and Hetherington et al. identified 481 unpublished trials by contacting 42,000 obstetricians and paediatricians in 18 countries [ 17 ].
O’Leary found that more detailed study information was provided as a result of contacting study authors [ 15 ].
The CRD handbook claims that contacting authors/experts offers no guarantee of obtaining relevant information [ 10 ]. Selph et al. found that, whilst identifying additional studies or study data is possible, contacting study authors is challenging and, despite extensive effort, missing data remains likely [ 16 ].
Hetherington et al. claimed that methodologically sound trials were not reported through author contact, even by the investigators responsible for them. This was attributed, anecdotally, to the possibility that the trials yielded results that the investigators found disappointing [ 17 ].
Reveiz et al. reported low response rates. Of 525 study authors contacted, only 40 (7.5%) replied [ 18 ].
Two studies and one handbook claimed that contacting authors/experts is time consuming for researchers [ 10 , 14 , 16 ]. Selph et al. noted that this method is time consuming for the study authors too, who must identify the data requested [ 16 ].
Gibson et al. claimed that contacting authors/experts may be less successful for older studies, given the increased possibility that authors’ contact details are out of date [ 14 ]. Gibson et al. reported a 78% (CI = 0.107–0.479) reduction in the odds of response if the article was 10 years old or older [ 14 ].
Gibson et al. claimed that additional resources were required to undertake author contact [ 14 ]. No specific details of the costs or time implications were recorded.
Gibson et al. recorded the duration between the information request and response [ 14 ]. This averaged 14 ± 22 days (median = 6 days) and was shortest for e-mail (3 ± 3 days; median = 1 day) compared to e-mail plus letter (13 ± 12 days; median = 9 days) and letter only (27 ± 30 days; median = 10 days) [ 14 ].
Selph et al. reported that all authors who provided data did so by the third attempt, suggesting that further attempts beyond this point to elicit studies or study data may be ineffective [ 16 ].
The handbooks provide a brief overview of the method and list some of the tools commonly used [ 9 , 10 ]. The studies typically evaluate the effectiveness of the tools used to undertake the search methods. Nine studies assessing the use of citation chasing were included [ 1 – 3 , 19 – 24 ].
It is used for identifying further studies, and clusters or networks of studies, that cite or are cited by a primary study [ 10 ].
Two studies provided detail on the application of the search method [ 1 , 3 ]. The studies noted that backward citation searching is undertaken by reviewing bibliographies of relevant or included studies and forward citation chasing is undertaken by checking if a study, already known to be relevant, has since been cited by another study [ 1 , 3 ].
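As a concrete illustration, backward and forward chasing can be expressed over a toy citation graph. The study identifiers below are invented; in practice, the cited-by links come from a citation index such as Web of Science, Scopus or Google Scholar.

```python
# Toy citation graph: each key maps a study to the studies it cites.
CITES = {
    "included_study": ["older_studyA", "older_studyB"],
    "older_studyA": ["older_studyC"],
    "later_paper": ["included_study", "older_studyA"],
}

def backward_chase(study):
    """Backward citation searching: studies listed in `study`'s bibliography."""
    return set(CITES.get(study, []))

def forward_chase(study):
    """Forward citation chasing: studies that have since cited `study`."""
    return {citing for citing, refs in CITES.items() if study in refs}
```

Backward chasing only needs the study's own reference list, whereas forward chasing depends on an external index recording who has cited the study since publication, which is why the currency and completeness of citation databases matter for this method.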
Three tools for electronic citation searching dominate the studies: Web of Science, Scopus and Google Scholar. The first two are subscription databases, and Google Scholar is presently free [ 19 ].
Four studies claimed that an advantage of citation chasing is that it is not limited by keywords or indexing, as bibliographic database searching is [ 2 , 3 , 20 , 21 ]. These studies claimed the following further advantages: Robinson et al. claimed that a small initial number of studies can create a network [ 21 ]; Hinde et al. claimed that citation searching can help inform researchers of parallel topics that may be missed by the focus of bibliographic database searches [ 2 ]; Janssens and Gwinn claimed that citation searching may be valuable in topic areas where there is no consistent terminology, since searches focus on links between studies rather than keywords [ 20 ]; and Papaioannou et al. reported that citation searching facilitated ‘serendipitous study identification’ due to the unstructured nature of citations [ 3 ].
One study appraised the quality of the studies identified through citation searching (and by other search methods) [ 3 ]. Papaioannou et al. reported that citation searching identified high-quality studies in their case study, although they do not define which quality appraisal tool was used to appraise study quality, so it is not clear if this observation is empirically derived [ 3 ].
Three studies stated that citation searching is reliant on the currency, accuracy and completeness of the underlying citation network [ 1 , 20 , 21 ]. Levay et al. identified ‘linking lag’, namely the delay between a study being cited and the citation being recorded in a citation database, which impacts on the currency of results [ 1 ]; Janssens and Gwinn stated that the accuracy and efficiency of citation searching depend on study authors citing studies, which means that selective citation of studies could cause relevant studies to be missed in citation searching [ 20 ]; and Robinson et al. reported limited returns from citation searching where ‘broken citation links’ created ‘island’ studies, which makes for incomplete citation networks and study identification [ 21 ].
Two studies questioned the efficiency of citation searching [ 2 , 22 ]. Wright et al. screened 4161 studies to identify one study (a yield rate of 0.0002) [ 22 ], and Hinde et al. screened 4529 citations to identify 76 relevant studies (a yield rate of 0.0168) [ 2 ]. Wright et al. specifically recorded the time taken to undertake citation chasing in their study (discussed below under resource use) [ 22 ], whereas Hinde et al. did not report the time taken to search but stated that the search was ‘very time consuming’ [ 2 ].
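The yield rates quoted here are simply relevant records divided by records screened, which can be checked directly from the reported figures (the 0.0002 for Wright et al. is 1/4161 rounded to one significant figure):

```python
def yield_rate(relevant: int, screened: int) -> float:
    """Proportion of screened records that proved relevant."""
    return relevant / screened

wright = yield_rate(1, 4161)   # ~0.00024, reported as 0.0002
hinde = yield_rate(76, 4529)   # ~0.0168
```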
Two studies claimed that replicability of citation searching strategies could be affected by the choice of the tools used [ 1 , 24 ]. Levay et al. questioned the replicability of Google Scholar, since search returns are controlled by Google’s algorithm, meaning that the results returned will change over time and cannot be replicated [ 1 ]. Bramer et al. found reproducibility of citation searching to be low, due to inaccurate or incomplete reporting of citation search strategies by study authors [ 24 ].
Two studies recorded the time taken to citation search, and one study commented on the time needed [ 1 , 3 , 22 ]. Levay et al. reported that citation searching the same 46 studies in Web of Science and Google Scholar took 79 h (Web of Science = 4 h and Google Scholar = 75 h) to identify and de-duplicate 783 studies (Web of Science = 46 studies and Google Scholar = 737 studies) [ 1 ]. Wright et al. reported that citation chasing the same 40 studies in Web of Science, Medline, Google Scholar and Scopus took 5 days in total (2 days to download 1680 results from Google Scholar; 1 day to download 2481 results from Web of Science, Scopus and Medline; and 2 days to screen all the studies) [ 22 ]. Both studies commented on the administrative burden of exporting studies from Google Scholar, which accounted for the majority of time searching in both cases [ 1 , 22 ]. Conversely, Papaioannou et al. claimed reference tracking and citation searching to be minimally time intensive, yielding unique and high-quality studies; however, the number of studies citation chased, the time taken to search and the tool used to appraise study quality were not reported [ 3 ].
One study provided data on the costs involved in citation chasing [ 1 ]. Levay et al. reported that the staff time to search Web of Science for 4 h cost between £88 and £136 and the 75 h to search Google Scholar cost between £1650 and £2550, based on staff grades ranging from £22–£34 per hour (all UK Sterling: 2012) [ 1 ].
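These staff-cost figures follow directly from hours multiplied by the reported hourly rate band, which can be verified:

```python
def staff_cost_range(hours: float, rate_low: float = 22, rate_high: float = 34):
    """Cost band for a search, given staff grades of 22-34 GBP/hour
    (2012 UK Sterling) as reported by Levay et al."""
    return hours * rate_low, hours * rate_high

web_of_science = staff_cost_range(4)    # (88, 136)
google_scholar = staff_cost_range(75)   # (1650, 2550)
```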
The handbooks focus on where to handsearch [ 9 , 10 ], and they provide guidance on who should do this [ 9 ]. The studies have a similar focus but they have sought to evaluate effectiveness compared with other search methods [ 25 – 28 ] as well as to evaluate the effectiveness and/or the efficiency of handsearchers in identifying studies [ 29 , 30 ]. Twelve studies were included [ 25 – 36 ].
It is used for ensuring the complete identification of studies or publication types that are not routinely indexed in, or identified by, searches of bibliographic databases, including recently published studies [ 10 ].
Handsearching involves a manual, page-by-page, examination of the entire contents of relevant journals, conference proceedings and abstracts [ 9 , 10 , 27 , 31 ].
Two handbooks and six studies provide detail on selecting journals to handsearch [ 9 , 10 , 25 , 27 , 30 – 33 ]. Three strategies were identified, as set out below.
The handbooks suggest that bibliographic databases can be used to identify which journals to handsearch [ 9 , 10 ]. The Cochrane Handbook, with its focus on identifying studies reporting randomised controlled trials (RCTs), suggests that searches of The Cochrane CENTRAL database, MEDLINE and EMBASE can be used to identify journals that return the greatest number of studies by study design in the relevant topic area of research [ 9 ]. Variations of this approach to selecting journals to handsearch were utilised in three studies [ 25 , 30 , 31 ]. The CRD Handbook suggests analysing the relevant results of the review’s bibliographic database searches in order to identify journals that contain the largest number of relevant studies [ 10 ].
The Cochrane Handbook suggests that journals not indexed in MEDLINE or EMBASE should be considered for handsearching [ 9 ]. A study by Blümle et al. considered this strategy necessary to obtain a complete search [ 32 ].
Two studies contacted experts to develop a list of journals to handsearch [ 30 , 31 ]. Armstrong et al. contacted organisations to develop a list of non-indexed journals to handsearch (in addition to database searching), and Langham et al. used a combination of database searches, contacting organisations and searches of library shelves to identify relevant journals (in addition to database searching) [ 30 , 31 ]. A list of possible journals to handsearch was provided to professional contacts to appraise and identify any missing journals [ 30 ]. Neither study specifically reports the number of journals identified by experts to handsearch, when compared to the number identified by database searching, and there is no discussion of the effectiveness of either method in identifying journals to handsearch.
Five studies explored specifically where or which sections of a journal to handsearch [ 25 , 27 , 28 , 31 , 33 ]. A study by Hopewell et al. handsearched full reports, short reports, editorials, correspondence sections, meeting abstracts and supplements [ 27 ], finding that, of the 369 reports uniquely identified by handsearching, 92% were abstracts and/or published in journal supplements [ 27 ]. Two studies reported the greatest value in searching supplement editions of journals [ 28 , 31 ], since these are not routinely indexed in databases [ 28 ]: Armstrong et al. identified three studies (out of 131) through searching supplement editions of journals [ 31 ], and Jadad et al. identified 162 eligible RCTs from a total of 2889 abstracts reported in four journals [ 28 ]. Croft et al. claimed value in searching the correspondence section of journals but did not record the effect of handsearching this section on study identification [ 33 ], and Adams et al. reported handsearching book reviews and identifying one study [ 25 ].
Table 3 summarises a claimed advantage of handsearching: the studies demonstrate that handsearching identifies studies missed through database searching. Where the studies reported the reason that studies were missed by database searching (the advantage of handsearching), these reasons are summarised in Table 3.
Table 3 Handsearching results
Studies | No. identified by handsearching but missed by MEDLINE | Why studies were missed by MEDLINE (claimed advantages of handsearching) | No. identified by MEDLINE but missed by handsearching | Why studies were missed by handsearching (claimed advantages of database searching)
---|---|---|---|---
Adams et al. [ ] | 9% (67 out of 698) of RCTs (CI 7–11%); sensitivity 94% (CI 93–95%); precision 7% (CI 6–8%) | ▪ Conference abstracts and letters not indexed in databases; ▪ RCTs not indexed, or no methodological data available to identify studies; ▪ Methodological descriptors (i.e. ‘random’ for allocation) were overlooked by database indexers | Sensitivity 18% (CI 15–21%), precision 40% (CI 35–45%); sensitivity 52% (CI 48–56%), precision 59% (CI 55–65%) | ▪ Studies missed by searcher error/fatigue; ▪ Methodological data ‘hidden’ in article
Armstrong et al. [ ] | 6 out of 131 (4.6%) RCTs/CCTs | ▪ Trials made no reference in abstract, title or subject headings to random allocation; ▪ Trials used terms for random allocation in the title, abstract or MeSH but were not correctly indexed by publication type; ▪ Trials were abstracts; ▪ Studies were identified in supplement editions of journals not indexed in MEDLINE; ▪ Not found in MEDLINE as the issue appeared to be missing from MEDLINE | 125 (of 131) studies were identified by a MEDLINE PICO search; 118 (of 131) were identified by a PICOS search | Not reported
Blümle and Antes [ ] | 10,165 RCTs/CCTs out of 18,491 (55%) | ▪ Incorrect indexing and incomplete compilation of health care journals in electronic databases impair the results of systematic literature searches | Not reported in abstract | Not reported in abstract
Croft et al. [ ] | 7 out of 10 (70%) | ▪ Two RCTs identified through letters to editors; ▪ Not picked up in MEDLINE search | 3 studies identified in MEDLINE (30%) | Not reported
Glanville et al. [ ] | 7 out of 25, although none of these studies met the review’s inclusion criteria | Not reported | Electronic searching (including reference checking), by comparison, yielded 30 included papers | Not reported
Hay et al. [ ] | 5 of 40 studies identified (compared to EMBASE) or 13 of 40 (compared to PsycLIT) | Not reported | EMBASE = 35 (out of 40) RCTs (88%), precision 9%; PsycLIT = 27 (out of 40), precision 9% | EMBASE: ▪ 3 journal years not indexed; ▪ 2 reason unclear. PsycLIT: ▪ 13 gap in indexing and current material being loaded
Hopewell et al. [ ] | 369 out of 714 (52%) RCTs | ▪ 252/369 (68%) no MEDLINE record; ▪ 232/252 (92%) abstracts and/or published in supplements | 32 of 714 (4%) |
Jadad et al. [ ] | 25 out of 151 (16.5%), precision 2.7%; 150 out of 162 eligible (precision 5.6%) | ▪ Non-indexed abstract (n = 7); ▪ Non-indexed letter (n = 1); ▪ Search term ‘random’ not in MeSH or abstract summary (n = 9); ▪ Key search term not in MeSH or in abstract summary (n = 7); ▪ No apparent reason (n = 1) | 2 of 245 (0.8%); 1 out of 13 (7.6%) | ▪ Why studies were missed by handsearching is not reported or explored
Langham et al. [ ] | 227 out of 710 (32%) | Not reported | MEDLINE identified 118 (16.6%) of studies missed by handsearching | Not reported
Mattioli et al. [ ] | 0 out of 25 (0%) (all identified by handsearching) | Not reported | 16 out of 25 (64%); 9 out of 25 (36%) | Not reported
Milne and Thorogood [ ] | 34 out of 82 (41.5%) | Not reported | Capture/recapture used to test; estimated 3 missed by handsearching | ▪ Inadequate indexing or trials not indexed on MEDLINE; ▪ Prohibits are not located by computerised searches
Table 3 also summarises a claimed disadvantage of handsearching: even though this method is often described as a ‘gold standard’, the studies demonstrate that database searching can identify studies missed by handsearching. Where the studies reported the reason that studies were missed by handsearching (the disadvantage relative to database searching), these reasons are summarised in Table 3.
Two studies claimed that the precision of handsearching was low when compared to the precision found in database searching [ 25 , 28 ]. Table 3 records the relative precision between handsearching and MEDLINE searching. Two studies claimed that the time needed to handsearch, and access to resources (including handsearchers), was a disadvantage of handsearching [ 31 , 36 ].
Seven studies reported detail on the time taken to handsearch [ 25 , 28 , 29 , 31 , 33 , 34 , 36 ]. There was no agreement between the studies on how long handsearching takes. The range was between 6 min [ 36 ] and 1 h [ 29 ] per journal handsearched. It is not possible to calculate an average, since not all studies reported their handsearching as time per journal handsearched. One study reported handsearching in ‘two hour bursts’ across 3 months in order to focus concentration, but the detail of how often these ‘bursts’ occurred and the effectiveness relative to ‘non-burst’ handsearching is not reported [ 33 ].
Jadad et al. reported the time taken specifically to handsearch the supplement editions [ 28 ]. A total of 2889 abstracts were handsearched in 172 min, an average of 1.1 min per eligible study identified [ 28 ].
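The per-study figure follows directly from the reported totals (172 min over 162 eligible studies, with 2889 abstracts screened):

```python
# Recomputing Jadad et al.'s reported handsearching rate from the totals.
minutes_searched = 172
abstracts_screened = 2889
eligible_found = 162

minutes_per_eligible = minutes_searched / eligible_found      # ~1.06, reported as 1.1
minutes_per_abstract = minutes_searched / abstracts_screened  # ~0.06 min, roughly 3.6 s
```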
The use of volunteers [ 29 , 30 ] or experienced handsearchers [ 27 , 31 ] varied between studies. Due to the varied outcome measures used across the studies, it is not possible to aggregate the effectiveness of experienced handsearchers against volunteers. Moher et al., however, specifically sought to test the effectiveness of volunteers in identifying RCTs, finding that volunteers with minimal training can contribute to handsearching [ 29 ]. Conversely, a study by Langham et al. discussed, as a possible explanation for their volunteer handsearcher missing studies, a lack of the specific knowledge needed to identify RCTs [ 30 ], which suggests experience or training is necessary. Milne and Thorogood suggested that handsearching may need to be undertaken by more than one person [ 36 ].
Five studies provided data on training given to handsearchers [ 25 , 27 , 29 , 30 , 34 ]. This included specific training on RCTs [ 27 , 29 ], a 2-h training session [ 29 , 34 ] and an information pack including guidelines to handsearching, developed by experienced handsearchers, and a thesaurus of terms to identify RCTs [ 30 ]. These data were reported narratively, and supporting information, such as the information pack reported in the study by Langham et al., was not provided in the studies [ 30 ].
Two studies provided guidance on approaches to handsearching if resources were limited [ 27 , 28 ]. Hopewell et al. claimed that, where resources are limited (and it was accepted that studies would be missed), and the aim of searching is the comprehensive identification of studies reporting RCTs, handsearching is best targeted on journals not indexed in MEDLINE and journals published before 1991 (the year the publication type indexing term for RCTs was introduced into MEDLINE [ 37 ]) [ 27 ]. Jadad et al., in a study focused on identifying RCTs, claimed that a combination of MEDLINE searches with selective handsearching of abstracts of letters may be a good alternative to comprehensive handsearching [ 28 ].
Armstrong et al. claimed that researchers handsearching for non-randomised study designs may need more time to handsearch, although no estimate of the additional time was given [ 31 ].
Moher et al. provided data on costs: in their 1995 study assessing the use of volunteers to handsearch, they recorded costs for photocopying (10–15 Canadian cents per page) and car parking (10 Canadian dollars) [ 29 ].
The handbooks focus on the benefit of searching registers [ 10 ], with The Cochrane Handbook providing specific guidance on where to search [ 9 ]. The studies focused on the searching of the registers [ 38 ] and the advantages and disadvantages of doing so [ 39 , 40 ]. Three studies were included [ 38 – 40 ].
It is used for identifying unpublished, recently completed or on-going trials [ 9 , 10 , 39 , 40 ] and keeping track of any adaptations to trial protocols and reported study outcomes [ 39 , 40 ]. Trials that have been stopped, or were unable to reach optimal recruitment, can also be identified.
The Cochrane Handbook includes a comprehensive list of trial registers to search [ 9 ]. Distinctions are made between national and international trial registers (which hold trials of any population or intervention), subject (i.e. population)-specific registers and pharmaceutical/industry trial registers [ 9 ]. There is a further distinction between registers of on-going trials, registers of completed trials and results registers. Glanville et al. also drew a distinction between trial registers (e.g. ClinicalTrials.gov) and portals to trial registers (e.g. the WHO portal) [ 38 ].
Glanville et al. explored the need to search trial registers as a complementary search method to comprehensive searches of bibliographic databases [ 38 ]. Glanville et al. reported that, in both ClinicalTrials.gov and WHO International Clinical Trials Registry Platform (ICTRP), their ‘highly sensitive single concept search’ of the basic interface offered the greatest reliability in identifying known records. The methods of searching are explored in greater detail in this study [ 38 ].
Two studies claimed that searching trial registers will identify unique studies or study data [ 39 , 40 ]. van Enst et al. reported that, in four out of 80 Cochrane reviews included in their study, primary studies were identified and included from a prospective search of a trial register [ 39 ]. Jones et al. reported that, of 29 studies to record registry search results in their study, 15 found at least one relevant study through searching a register [ 40 ].
Two studies claimed that searching of trial registers facilitates checking of a priori outcome measures against reported final outcome measures [ 39 , 40 ]. Jones et al. suggested that the comparison of registered trials (and trial data) against published trials (and data) will aid the understanding of any potential bias in the trials [ 40 ].
Jones et al. noted that an advantage of trial registers is that they often include contact details for trial investigators, thereby facilitating author contact [ 40 ].
Two studies concluded that trial registers must be searched in combination with other bibliographic resources [ 38 , 39 ]. Glanville et al. concluded that trial registers lag behind major bibliographic databases in terms of their search interfaces [ 38 ].
None were reported.
The handbooks report limited guidance for web searching. The CRD Handbook suggests that web searching may be a useful means of identifying grey literature [ 10 ], and The Campbell Handbook provides some guidance on how to undertake web searches, including a list of grey literature websites [ 11 ]. The studies explored the role of web searching in systematic reviews. Five studies were included [ 41 – 45 ].
It is used for identifying published or unpublished studies not indexed or included in bibliographic databases, or studies missed by database (or other) search methods, identifying and retrieving grey literature and identifying study protocols and on-going studies [ 10 , 11 , 42 , 45 ].
The CRD Handbook makes a separation between a search of the internet through a ‘search engine’ and searches of specific and relevant websites [ 10 ]. It considers the latter to be more practical than a general search of the World Wide Web in systematic reviews [ 10 ].
The Campbell Handbook provides guidance on searching using a search engine [ 11 ], and Eysenbach et al. reported the results of a pilot study to assess the search features of 11 search engines for use in searching for systematic reviews [ 45 ]. The Campbell Handbook suggests that, when using search engines, researchers should use the advanced search function. In some cases, this allows searchers to use Boolean logic and employ strategies to limit searches, such as precise phrases like “control group” [ 11 ].
Godin et al. reported the development and use of a web-searching protocol to identify grey literature as part of study identification in a systematic review [ 43 ]. Godin et al. broke their web searching into three parts: first, searches using Google for documents published on the internet; secondly, searches using custom Google search engines; and thirdly, browsing targeted websites of relevant organisations and agencies [ 43 ].
Two studies identified studies uniquely by web searching [ 43 , 45 ]. Eysenbach et al. identified 14 unpublished, on-going or recently finished trials, and at least nine were considered relevant for four systematic reviews [ 45 ]. Godin et al. identified 302 potentially relevant reports of which 15 were included in their systematic review [ 43 ].
Three studies commented on the types of study or study data identified [ 42 , 43 , 45 ]. Eysenbach et al. claimed that internet searches may identify ‘hints’ to on-going or recently completed studies via grey literature [ 45 ]; Godin et al. uniquely identified report literature [ 43 ]; and Stansfield et al. suggested that web searching may identify studies not identified from ‘traditional’ database searches [ 42 ].
Five studies discussed the disadvantages of web searching [ 41 – 45 ]. The studies drew illustrative comparisons between database searching and web searching in order to highlight the disadvantages of web searching:
Three studies commented on searching using a web search engine: Eysenbach et al. reported that current search engines are limited by functionality and that they cover only a fraction of the visible web [ 45 ]; Mahood et al. claimed that their chosen search engines could not accommodate either full or modified search strategies, nor did they support controlled indexing [ 44 ]; and Godin et al. claimed that, in contrast to systematic searches of bibliographic databases, where one search strategy combining all search terms would be used, Google searches may require several search enquiries containing multiple combinations of search terms [ 43 ].
Three studies commented on the number of studies returned through web searching [ 43 – 45 ]. Godin et al. claimed that searching Google can be overwhelming due to the amount of information and the lack of consistent organisation of websites [ 43 ]; Mahood et al. had to limit their web searches to title only in order to control search returns [ 44 ]; and Eysenbach et al. recorded recall of between 0 and 43.6% for finding references to published studies, while precision for hints to published or unpublished studies ranged between 0 and 20.2% [ 45 ].
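Recall and precision here carry their usual retrieval definitions, which can be made explicit; the worked example below uses Godin et al.'s counts reported earlier (15 reports included out of 302 potentially relevant results), while the generic figures are illustrative only.

```python
def recall(found_relevant: int, total_relevant: int) -> float:
    """Share of all relevant studies that the search retrieved."""
    return found_relevant / total_relevant

def precision(found_relevant: int, total_retrieved: int) -> float:
    """Share of retrieved records that were relevant."""
    return found_relevant / total_retrieved

# Worked example with Godin et al.'s counts: 15 reports included out of
# 302 potentially relevant results retrieved by web searching.
godin_precision = precision(15, 302)   # ~0.05, i.e. about 5%
```

Low precision of this kind is what drives the screening burden discussed below: the smaller the share of relevant records, the more results must be inspected per included study.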
Three studies commented on the search returns [ 42 , 43 , 45 ]. Eysenbach et al. and Stansfield et al. commented on the lack of abstracts when web searching, which impacts on the precision of web searching and volume of studies identified [ 42 , 45 ], and Godin et al. claimed that it was impossible to screen all results from a Google search, so researchers were reliant on page ranking [ 43 ].
Three studies claimed potential issues with the reliability of items identified through web searching [ 41 , 43 , 45 ]. Godin et al. discussed the possibility of bias created in web searching, where search results are presented depending on geographic location or previous search history [ 43 ]; Briscoe reported that algorithms used by search engines change over time and according to the user, which will influence the identification of studies and impact the transparency and replicability of search reporting [ 41 ]; and Eysenbach et al. reported identifying a study published on-line that differed in reporting to the copy published in the peer-reviewed journal, where adverse event data was omitted in the on-line version [ 45 ].
Stansfield et al. claimed that the lack of functionality to export search results presented a challenge to web searchers [ 42 ]. Three studies claimed that web searching presented difficulties in transparent search reporting [ 41 , 43 , 44 ].
Two studies discussed the time taken to web-search [ 43 , 45 ]. Eysenbach et al. reported searching 429 returned search result pages in 21 h [ 45 ], and Godin et al. reported custom Google searching taking 7.9 h and targeted web searches taking 9–11 h, both timings being specific to the case studies in question [ 43 ].
Stansfield et al. discussed planning when to undertake web searching, linking the plan to the time-frame and resources available in order to inform where to search [ 42 ].
Mahood et al. claimed that large yields of studies can be difficult and time consuming to explore, sort, manage and process for inclusion [ 44 ]. Mahood et al. initially had to limit their web searching to title only (as a method to control volume) before eventually rejecting their web searching due to concerns about reproducibility and ability to manage search returns [ 44 ].
No studies reported any data relating to the costs involved in web searching.
The discussion will focus on two elements inherent in the research question of this study: how does current supplementary search practice compare with recommended best practice and what are the implications of the evidence for searching using these supplementary methods.
The advent of e-mail (and more specifically the standardised reporting of e-mail addresses for corresponding study authors) would appear to have improved the efficiency of contacting study authors [ 10 , 11 ], although it is possible that it has not altered the effectiveness [ 46 ]. Identifying additional studies or data (the effectiveness) is conditional upon a reply, whatever the method of contact. The guidance of the handbooks, to consider how best to set out requests for studies or study data, is well made but seldom explored in the studies themselves. Whilst making contact is important (and is what the studies evaluate), exploring techniques to improve the rate of reply would be a valuable contribution to improving the efficiency and effectiveness of identifying studies or study data through author contact.
When to contact study authors is worthy of consideration, since the studies included in this review reported a delay between asking for studies or study data and a response. Sufficient time should be allowed between identifying the need for author contact, making contact, a response being provided and the study or data being integrated into the review (with all the methodological implications considered). Recognising the need for this method, combined with the realisation that it takes time to yield results, is important. It is perhaps for this reason that, whilst contacting authors is common in systematic reviews, it is not a method of study identification that is undertaken as a matter of course [ 47 ].
The concept of contacting authors could also be understood more broadly than simply requesting known studies or data. Whilst in contact with authors, requests for unpublished, linked or forthcoming studies are not unreasonable, and authors can assist with the interpretation of specific elements of studies or topics, aiding the process of critical appraisal. Furthermore, Ogilvie et al. found that the value in contacting experts was the link to better reports of studies already identified [ 48 ]. This highlights the flexibility of the search method: it offers not only the chance to identify known studies or study data but also the opportunity to speak with experts.
The advantages and disadvantages (and resource requirements) were most clearly stated for this supplementary search method. The handbooks, and some studies, suggested and found advantages and disadvantages in the methods and tools.
The Cochrane Handbook suggested that there is little evidence to support the methodology of citation searching, since the citation of studies ‘is far from objective’ [ 49 ]. The studies included in this review suggested that the reasons for ‘non-citation’ are unclear and could range from selective citation (i.e. selective reporting) to pragmatic reasons, such as a review of trials being cited instead of each individual trial reviewed [ 21 ]. Furthermore, a high number of citations for a study should not be mistaken for an indicator of study quality [ 50 , 51 ] or of a complete citation network. Non-citation of studies, or ‘linking lag’ [ 1 ], breaks citation networks [ 1 , 2 , 21 ], meaning it becomes unclear whether all studies citing a primary study have been identified [ 20 ]. There is presently no method to assess the completeness of citation networks and no certainty as to the comprehensiveness of any citation chasing.
There is little common agreement between the studies as to which tool (or combination of tools) is superior in citation chasing, since the relative merits of each resource depend greatly upon the topic of review, the date range of the resource and the currency of the results (cf. [ 1 , 23 , 24 , 52 – 54 ]). A study that evaluated the tools (Web of Science, SCOPUS and Google Scholar), how they are best searched, how the platform hosts select data for inclusion, and the advantages and disadvantages of each would allow clearer statements on when (or whether) to use which tool.
There are, undoubtedly, advantages to citation searching. The citational link is neutral, in the sense that it only links the studies but does not explain the nature of the link. This is important, since a citation search will identify any study linked to the primary study, including erratum studies and studies that dispute or disagree with the primary study, and it should also link different publication types, such as editorial content, reviews or grey literature. This could not only aid the interpretation of studies but also help researchers explore the idea of study impact. Furthermore, as reported in the ‘ Results ’ section, a citation search links by citation and is not beholden to the use of ‘the correct’ search terms or database indexing. It may, therefore, as Papaioannou et al. reported, facilitate serendipitous study identification [ 3 ], suggesting that citation chasing is valuable both in scoping review topics, to aid the development of searches, and in review searches, to ensure all studies have been identified.
The nature of bi-directional citation chasing suggests that, given the relative specificity, this method could possibly be used to efficiently update systematic reviews using known includes as the citations to chase [ 20 ]. Researchers have had positive, although incomplete, success trialling this method, and studies suggest that citation chasing alone is not a substitute for standard update searches [ 55 , 56 ].
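Bi-directional citation chasing — following both the references of, and the citations to, a set of known includes — can be sketched as a breadth-first walk over a citation graph. The dictionaries below are a hand-built stand-in for a real citation index (Web of Science, SCOPUS, Google Scholar); the function and data names are illustrative only:

```python
from collections import deque

def chase_citations(seeds, cited_by, references, max_depth=1):
    """Bi-directional citation chasing from a set of known included studies.

    cited_by[s]   -> studies that cite s   (forward chasing)
    references[s] -> studies that s cites  (backward chasing)
    Returns every study reachable within max_depth citation links,
    excluding the seeds themselves.
    """
    found = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        study, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in cited_by.get(study, []) + references.get(study, []):
            if neighbour not in found:
                found.add(neighbour)
                queue.append((neighbour, depth + 1))
    return found - set(seeds)  # newly identified candidates only

# A tiny illustrative network: B cites A, and A cites C.
new_candidates = chase_citations(["A"], cited_by={"A": ["B"]}, references={"A": ["C"]})
```

Increasing `max_depth` chases citations of citations, which is where incomplete networks (‘linking lag’) bite hardest, since one missing link cuts off everything beyond it.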
The evidence on handsearching can be summarised under three headings: (1) selecting where to handsearch, (2) what to handsearch and (3) who does the handsearching. In relation to (1), the handbooks advocate selecting journals to handsearch on the basis of the number of relevant studies included from journals identified in database searching. This approach makes handsearching a supplementary method to database searching, since the database searches define the list of journals to handsearch.
Studies included in this review provided empirical evidence that handsearching journals identified by database searching was effective in identifying studies missed by poor indexing, lack of study design or omission of key search terms, or where sections of journals are not indexed on databases. In this way, this approach to selecting journals to handsearch could be categorised as a ‘safety net search’, since it aims to identify studies missed by deficiencies in literature searching and database indexing. This approach to selecting journals to handsearch, even though it is effective, could be argued to be a duplication of effort, since the journals being handsearched have already been ‘searched’ through the bibliographic databases. This is likely why the studies recorded low precision (compared to database searches) and why handsearching takes longer [ 28 ].
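The handbook approach described above — ranking journals by how many included studies they contributed in database searching — amounts to a simple frequency count. A minimal sketch (the function name and example data are ours):

```python
from collections import Counter

def journals_to_handsearch(included_studies, top_n=5):
    """Rank journals by how many included studies (identified through
    database searching) they published; the top-ranked journals become
    the candidate list for handsearching."""
    counts = Counter(study["journal"] for study in included_studies)
    return [journal for journal, _ in counts.most_common(top_n)]

# Hypothetical includes from a database search:
includes = [
    {"id": 1, "journal": "BMJ"},
    {"id": 2, "journal": "BMJ"},
    {"id": 3, "journal": "Lancet"},
]
shortlist = journals_to_handsearch(includes, top_n=2)
```

The limitation the text identifies falls out of the code: a journal that the database searches never surfaced can never appear in `shortlist`, which is why expert contact is proposed as a complementary route.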
The Cochrane Handbook and three studies suggested alternative ways to identify journals to handsearch: namely, selecting journals not indexed on MEDLINE or EMBASE [ 9 , 32 ]—a suggestion that is easily changed to read ‘primary databases’ relevant to the field of study (i.e. ERIC for reviews of educational topics)—and contacting experts, contacting organisations and searches of library shelves [ 30 , 31 ]. Neither the study by Armstrong et al. nor the study by Langham et al. listed the journals identified by method of identification, so it is not clear whether the list of journals provided by experts differed from those provided by databases [ 30 , 31 ]. This review did not identify any studies that compared the use of databases to identify journals to handsearch with these alternative methods, but such a study may be of value if efficiencies could be found in practice.
It may be that, in reviews in which a comprehensive identification of studies is required, identifying journals to handsearch should be done both by using databases and by contacting experts or organisations. The former covers any deficiencies in the database searching; the latter captures any unique journals or conferences known to experts but not indexed in databases.
Selecting what to handsearch, and who should handsearch, was another notable difference between the handbooks and the studies. The studies included in this review identified studies uniquely by handsearching various sections of journals (from abstracts through to book reviews), and they variously used volunteers, handsearchers given training, and experienced handsearchers, with varying degrees of success (success here meaning how effectively studies were identified compared to database searching). The Cochrane Collaboration arguably has one of the longest track-records of handsearching projects (cf. [ 37 ]), and it is their recommendation that handsearching is the page-by-page examination of the entire contents of a journal [ 9 , 10 ] by a well-trained handsearcher [ 9 ]. Handsearching is commonly used as a ‘gold standard’ comparator to establish the effectiveness of other search methods. Given that every study included in this review identified studies uniquely by handsearching, but also missed studies by handsearching, a reminder of what constitutes handsearching is likely warranted.
The handbooks provide guidance on where to search, and the studies focused on the effectiveness of study identification in selected registers and/or the practicalities of searching registers. In this way, the studies advance the guidance of the handbooks, since they provide empirically derived case studies of searching the registers. The implications for searching, however, are clear: searching trial registers should still be undertaken in combination with bibliographic database searching [ 38 , 57 ]. Despite the aims of the International Committee of Medical Journal Editors [ 58 ], comprehensive and prospective registration of trials—and keeping the trial data up to date—is still not commonplace. It is unclear what pressure (if any) is put upon trial managers who do not prospectively register their trials, or whether there is any penalty when they do not. Until this issue is resolved, the comprehensiveness of registers will remain uncertain, and a combination of bibliographic database searching (to identify published trials) and searches of trial registers (to identify recruiting, on-going or completed trials) is required.
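Combining register searches with database searches implies matching the two result sets against each other. One pragmatic approach — a sketch of ours, not a method taken from the included studies — is to match on trial registration identifiers such as ClinicalTrials.gov NCT numbers, which many published reports now quote:

```python
import re

# NCT numbers are the string "NCT" followed by eight digits.
NCT_PATTERN = re.compile(r"NCT\d{8}")

def unpublished_register_trials(register_records, database_records):
    """Return register trials whose NCT number does not appear anywhere
    in the database search results, i.e. candidates for on-going or
    completed-but-unpublished trials."""
    published_ids = {
        match.group()
        for record in database_records
        for match in NCT_PATTERN.finditer(record)
    }
    return [r for r in register_records if r["nct_id"] not in published_ids]

# Hypothetical data: two registered trials, one of which has a published report.
register = [{"nct_id": "NCT01234567"}, {"nct_id": "NCT07654321"}]
papers = ["Trial registered as NCT01234567; results published 2015."]
unpublished = unpublished_register_trials(register, papers)
```

Matching is only as reliable as the reporting of registration numbers in publications, which is itself inconsistent — the same gap in practice the paragraph above describes.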
The advantages of searching trial registers are worthy of discussion. Registered trials include an e-mail address for trial managers, which can facilitate author contact, and the studies concluded that more consistent searching of trial registers may improve identification of publication and outcome reporting bias [ 40 , 59 ]. If trial managers were using the portals correctly, it would also be a practical method of reporting results and sharing study data, perhaps akin to a ‘project website’, as recommended in the Cochrane Handbook [ 9 ]. The variability of the search interfaces is notably a disadvantage and something that could be improved. Glanville et al. observed that the search interfaces lag behind major bibliographic databases [ 38 ]. If the registers themselves are hard to search (and in some cases impossible to export data from), they are less likely to be searched. Trial managers and information specialists/researchers could usefully work together with the registers to develop the interfaces in order to meet the needs of all who use them. The use of trial registers may be broader than only researchers [ 60 ].
In their 2001 study, Eysenbach et al. stated that the role of the internet for identifying studies for systematic reviews is less clear when compared to other methods of study identification [ 45 ]. The handbooks do not update this view, and very few studies were identified in this review which improve upon Eysenbach et al.’s claim. The studies have attempted to take on Eysenbach et al.’s suggestion that a systematic investigation to evaluate the usefulness of the internet for locating evidence is needed. Mahood et al., however, had to abandon their attempts to web-search [ 44 ], but Godin et al. took this work a little further in their case study with reference to identifying grey literature [ 43 ].
The comparative lack of guidance in the handbooks could stem from a lack of knowledge of how to web-search, or from uncertainty about how to do so systematically, such that web searching could be replicable and therefore included as a method to identify studies without introducing bias. Researchers are exploring how far web searching can be replicable and transparent while remaining functional [ 41 ]. Further guidance is undoubtedly needed on this supplementary search method.
The date range and age of the handbooks and studies included in this review could be considered a limitation of this study.
Comparative and non-comparative case studies form the evidence base for this study. The studies included in this review have been taken at face-value, and no formal quality appraisal has been undertaken since no suitable tool exists. Furthermore, supplementary search methods are typically evaluated in the context of effectiveness, which is potentially a limited test of the contribution they may offer in the process of study identification. Different thresholds of effectiveness and efficiency may apply in the use of supplementary search methods in systematic reviews of qualitative studies when compared to reviews of RCTs, for example.
The studies themselves do not necessarily test the advantages and disadvantages claimed for each method: in most cases, proposed advantages and disadvantages have not been evaluated in practice.
Whilst we have aimed to comprehensively identify and review studies for inclusion, the use of supplementary search methods is a broad field of study and it is possible that some completed studies may have been inadvertently missed. It is possible that standard systematic review techniques, such as double-screening, would have minimised this risk, but we are confident that, whilst a more systematic approach may have improved the rigour of the study, it is unlikely to alter the conclusions below.
Current supplementary search practice aligns methodologically with recommended best practice. The search methods as recommended in the handbooks are perceptibly the same methods as used in the studies identified in this review. The difference between the handbooks and the studies is of purpose: the studies sought to test the search methods or tools used to undertake the search methods.
The causal inference between methods (as presented in the handbooks) and results (as found in the studies) could be usefully tested to develop our understanding of these supplementary search methods. Further research is needed to better understand these search methods. In particular, consistency in measuring outcomes, so that results can be generalised and trends identified, would provide not only better effectiveness data but also efficiency data, giving researchers a clearer understanding of the value of using (or not using) these search methods.
All of the studies discussed in this review claimed to identify additional includable material for their reviews using supplementary search methods that would have been missed using database searches alone. Few of the studies, however, reported the resources required to identify these unique studies. Further, none of the studies used a common framework or provided information that allows a common metric to be calculated. It is not, therefore, possible to compare the resources required to identify any extra study with each search method. This, alongside the use of comparative and non-comparative case studies as the primary study design to test effectiveness, limits our ability to generalise the results of the studies and so reliably interpret the broader efficiency of these search methods. Researchers could usefully consider reporting the amount of time taken to undertake each search method in their search reporting [ 28 , 61 ].
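A common metric of the kind the studies lack could be as simple as searcher time per unique includable study identified. The sketch below, with entirely hypothetical figures, shows the comparison the paragraph argues for:

```python
def minutes_per_unique_include(search_minutes, unique_includes):
    """Crude efficiency metric for a supplementary search method:
    searcher time spent per unique includable study identified.
    Returns None when the method identified nothing unique."""
    if unique_includes == 0:
        return None
    return search_minutes / unique_includes

# Hypothetical figures for two methods within the same review:
handsearch_cost = minutes_per_unique_include(600, 4)  # 150.0 min per study
citation_cost = minutes_per_unique_include(120, 2)    # 60.0 min per study
```

Were studies to report time and unique yield consistently, a metric like this would make the relative efficiency of the methods directly comparable across reviews.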
Identifying unique studies is commonly interpreted as adding value to the review and the process of searching in and of itself. Only three studies sought to extend this, appraising either the quality of the studies identified or the contribution of the studies to the synthesis as a way of considering the value of the additional studies [ 3 , 16 , 45 ]. In reviews of effectiveness, where all studies should be identified so as to generate a reliable estimate of effect, study value might be a moot point but, in resource-limited situations, or for reviews where a comprehensive identification of studies is less important, study value is an important metric in understanding the contribution of supplementary search methods and the extent to which researchers invest time in undertaking them.
Comparing the time taken to search, with a summary estimate of the contribution or value of the studies identified uniquely, against the total number of studies identified, could alter how researchers value supplementary searches. It would permit some basic form of retrospective cost-effectiveness analysis, which would ultimately move literature searching beyond simply claiming that more studies were identified to explaining what studies were identified, at what cost and to what value.
CC is grateful to Danica Cooper for her proof-reading and comments. CC is grateful for feedback from Jo Varley-Campbell and the EST and IS team at Exeter Medical School: Jo Thompson-Coon, Rebecca Abbott, Rebecca Whear, Morwenna Rogers, Alison Bethel, Simon Briscoe and Sophie Robinson. CC is grateful to Juan Talens-Bou and Jenny Lowe for their assistance in full-text retrieval.
CC is grateful to Chris Hyde for his help in stimulating the development of this study and his on-going guidance.
This work was funded as part of a PenTAG NIHR Health Technology Assessment Grant.
Abbreviations.
RCT: Randomised controlled trial
CC conceived, designed and undertook the study as a part of his PhD. AB, NB and RG provided comments on, and discussed, the study in draft as part of CC’s PhD supervision. All authors have approved this manuscript prior to submission.
CC is a p/t PhD student exploring the use of tailored literature searches in complex systematic reviews. This publication forms a part of his PhD thesis.
Consent for publication, competing interests.
AB and RG are associate editors of Systematic Reviews.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Eysenbach et al. recommend Alta Vista but this search engine no longer exists.
Chris Cooper, Email: [email protected] .
Andrew Booth, Email: [email protected] .
Nicky Britten, Email: [email protected] .
Ruth Garside, Email: [email protected] .
All research was conducted under informed consent in accord with the Declaration of Helsinki. Participated volunteered their responses without financial compensation.
The authors declare that there is no conflict of interest.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Reprints and permissions
Anderson, T., Suresh, M. & Farb, N.A. Meditation Benefits and Drawbacks: Empirical Codebook and Implications for Teaching. J Cogn Enhanc 3 , 207–220 (2019). https://doi.org/10.1007/s41465-018-00119-y
Download citation
Received : 10 September 2018
Accepted : 21 December 2018
Published : 14 January 2019
Issue Date : 15 June 2019
DOI : https://doi.org/10.1007/s41465-018-00119-y
Anyone you share the following link with will be able to read this content:
Sorry, a shareable link is not currently available for this article.
Provided by the Springer Nature SharedIt content-sharing initiative
Discover the world's research
1. Introduction
1.1 Bharat, as one of the world’s largest economies, is progressing towards a major energy transformation. The Government of India has introduced several significant initiatives to promote solar energy as part of its broader renewable energy and sustainability efforts. The National Solar Mission (NSM), launched in 2010, aims at large-scale solar deployment with a target of 100 GW by 2022. The Pradhan Mantri Kisan Urja Suraksha evam Utthaan Mahabhiyaan (PM-KUSUM) scheme was introduced in 2019 to support farmers in installing solar pumps and power plants, while the Rooftop Solar Programme sets a goal of 40 GW of rooftop solar capacity. Solar Park Development focuses on large-scale projects with an additional 40 GW of capacity. India, as a co-founder of the International Solar Alliance (ISA), leads global efforts in solar energy promotion. The Pradhan Mantri Janjati Adivasi Nyaya Maha Abhiyan (PM-JANMAN) emphasizes solar energy for tribal communities, while the PLI Scheme boosts domestic solar manufacturing. Collectively, these initiatives contribute to India’s renewable energy targets of 175 GW by 2022 and 450 GW by 2030.
1.2 The nation’s rapidly growing population and increasing industrialisation have led to a surge in energy needs over the past half-century. Traditional fossil fuels, which are both finite and harmful to the environment, are no longer a viable long-term option. Given these challenges, solar energy emerges as a beacon of hope: India’s geographical advantage of abundant sunlight makes solar energy not just a promising option but a potential game-changer. Empirical research on solar energy and sustainability can identify optimal strategies for transitioning to solar power and reducing the carbon footprint, thereby helping to combat climate change. While solar technology is advancing rapidly, efficiency, storage, and cost remain significant challenges. Empirical research can address these challenges by testing new materials, increasing photovoltaic cell efficiency, and reducing storage costs. Innovation in these areas is essential for making solar energy more efficient and attractive for widespread use.
1.3 The solar energy sector holds significant potential for job creation in areas such as manufacturing, installation, maintenance, and research. Empirical studies can map the job creation landscape, identify skill gaps, and suggest educational and training programs to prepare the workforce for the emerging green economy.
1.4 Furthermore, the economic benefits of solar energy can attract further investment, both domestic and international, thereby bolstering India’s economy. Empirical research can also address issues arising from India’s heavy reliance on imported fossil fuels by identifying practical ways to harness domestic solar energy, enhancing energy security and independence. Such research can likewise inform decentralised solar solutions for rural and remote areas, reducing regional economic inequalities. A combination of traditional knowledge and modern technologies, supported by robust empirical data, can underpin sustainable and culturally appropriate energy solutions, paving the way for a brighter and more sustainable future for India.
1.5 The researchers' role in conducting empirical research is pivotal for effective policymaking for sustainable energy development. By providing robust empirical evidence, researchers play a crucial role in offering insights into approaches to integrate solar power into the national grid, design effective subsidy frameworks, and create incentives for solar energy producers and consumers. Understanding the socio-economic impacts of solar energy adoption allows for policies that maximise benefits while minimizing potential drawbacks. This data-driven approach ensures that policies are technically sound, socially equitable, and economically beneficial.
1.6 In this context, the ICSSR has identified Solar Energy and Sustainability as a key area for empirical research, with the goal of generating outcomes and insights on various aspects of solar energy transformation in both urban and rural sectors.
i. Proposal Requirements:
1.7 Proposals must ensure a significant sample size for research to objectively assess the various dimensions of Solar energy and sustainability issues. Detailed geographical coverage (villages, blocks, and districts) should be shown using charts and GIS-based maps to illustrate project locations.
1.8 Proposals should encompass fact-based and action-oriented research, including a thorough background of the specific research area and a relevant literature review leading to well-defined hypotheses, objectives, and research questions. The proposal must outline a systematic research methodology, specifying sampling methods, data sources, interview schedules/questionnaires, and tools and software for analysis. The study should address quantitative and qualitative approaches, focusing on actionable and applied analysis, and include specific scope and limitations with mitigation strategies.
ii. Research Team Composition:
1.9 The research team should include three to six scholars to develop a comprehensive study, emphasising collaboration among multiple institutions. Researchers from various disciplines, institutions, and regions are encouraged to collaborate on this research.
1.10 The Project Director will be responsible for the successful completion of the study and the proper utilisation of the funds. The Project Director and Co-Project Directors should be from the social and human sciences.
iii. Duration, Budget and Disbursal of Grant
1.11 The project duration will be 10 to 12 months, with a budget of up to Rs. 15 Lakhs. The grant will be disbursed in instalments as deemed fit by the ICSSR. The affiliating institution should open/maintain a dedicated bank account for the ICSSR grant (Scheme Code-0877) that is duly registered at the EAT Module of the PFMS portal.
iv. Methodological Approaches:
1.12 A multi-method approach will be employed to provide a comprehensive analysis. Quantitative studies will deploy statistical methods to analyse existing large-scale datasets (or the data gathered by the researchers) on solar energy production, consumption, and impacts that will offer insights into overall trends. To complement the quantitative exercise, qualitative research will be conducted through in-depth interviews, focus-group discussions, and case studies to capture the nuanced experiences and perspectives of the stakeholders, ranging from policymakers to local users. Additionally, a comparative regional analysis will assess the performance and challenges of solar energy implementation across various states, identifying region-specific solutions and best practices tailored to diverse environmental and socio-economic contexts.
2.1 Researchers who are permanently employed as, or retired from, faculty positions at UGC-recognised Indian Universities/Deemed-to-be Universities/affiliated colleges/institutions under 2(f) or 12(B), ICSSR Research Institutes, ICSSR-recognised institutes, and Institutes of National Importance as defined by the Ministry of Education (MoE) are eligible to apply. Applicants should have substantial research experience, demonstrable through published books/research papers/reports. The Project Director and Co-Project Director must also hold a Ph.D. degree.
2.2 In exceptional cases, Independent researchers with PhD degrees who are not affiliated permanently with any institution mentioned in Clause 2.1 but have produced at least two sole-author books published by reputed publishers and/or 05 articles in peer-reviewed journals can also be considered as Co-Project Directors. Such scholars will be required to collaborate with a faculty from institutions given in 2.1 above.
2.3 Further, those researchers with PhD degrees who are in contractual appointment in academic/research institutions mentioned in Clause 2.1 and have produced at least two sole-author books published by reputed publishers and/or 05 articles in peer-reviewed journals may also apply as Co-Project Directors. In the event of their contract expiry, they may continue as Co-Project Directors until the completion of the project.
2.4 Senior and retired government and defence officers (having not less than 05 years of regular service) possessing a Ph. D. degree in any social science discipline and having produced at least two sole-author books published by reputed publishers and/or 05 articles in peer-reviewed journals can also apply as Co-Project Directors, in collaboration with a faculty from institutions given in 2.1 above.
2.5 Non-academic participants/stakeholders/local community may also be part of the research team in the capacity of key informants.
3.1 Applications shall be invited through an advertisement on the ICSSR website and shall be promoted through the social media platforms of the ICSSR.
3.2 The applicants must submit an online application along with the research proposal, annexures, and other required documents in the prescribed format duly forwarded by the Competent Authorities of the affiliated university/college/institute. Hard copies of the same must be submitted within ten days of the last date of submission of the online application. The online application form will be available on the ICSSR website from September 13th, 2024. The last date for online submission is October 13th, 2024.
3.3 Research proposals and final reports should be in English or Hindi. The application form should be filled out using Arial font (for English) or Mangal Unicode (Devanagari) font (for Hindi).
3.4 Researchers can apply for only one project at a time. For any ongoing or completed ICSSR project, the Project Director must observe a cooling-off period of one year before applying for another project, calculated from the date of submission of the final report. However, this will not apply to minor projects/short-term empirical research projects of duration equal to or less than 12 months. For ICSSR Research Institutes, the cooling-off period will not be applicable.
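The cooling-off rule in Clause 3.4 reduces to simple date arithmetic. The sketch below illustrates it; the function and variable names are this sketch’s own, not part of the ICSSR guidelines, and a 365-day year is assumed.

```python
# Illustrative eligibility check under Clause 3.4: a Project Director
# may apply again one year after submitting the previous final report;
# minor/short-term projects (<= 12 months) are exempt.
# Helper names are hypothetical, not ICSSR terminology.
from datetime import date, timedelta

COOLING_OFF = timedelta(days=365)  # one year, assumed as 365 days

def eligible_to_apply(final_report_submitted: date, apply_on: date,
                      short_term: bool = False) -> bool:
    """Return True if the cooling-off period has elapsed (or is exempt)."""
    if short_term:
        return True  # exemption for projects of 12 months or less
    return apply_on - final_report_submitted >= COOLING_OFF

# Final report submitted 15 Jan 2024; applying by the 13 Oct 2024 deadline:
print(eligible_to_apply(date(2024, 1, 15), date(2024, 10, 13)))  # False
# Final report submitted over a year before the deadline:
print(eligible_to_apply(date(2023, 9, 1), date(2024, 10, 13)))   # True
```

The exemption flag mirrors the clause’s carve-out for minor/short-term projects; actual eligibility determinations rest with the ICSSR.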
4.1 The procedure for awarding the projects will be in multiple phases before the declaration of final results. All applications submitted to the ICSSR will be screened and evaluated by the expert committee following a blind review process. Shortlisted applicants will be invited to interact/present at ICSSR (in person or online).
4.2 The expert committee(s) shall make a recommendation(s) for awarding studies and suggest the budget for the proposed studies after interacting with the shortlisted applicants.
4.3 The merit list of selected candidates for Projects will be published on the ICSSR website.
4.4 Only the selected candidates and their affiliating universities shall be informed individually through a provisional award letter clearly specifying the formalities and documents required for joining the Project.
5.1 The amount will be disbursed in instalments, depending on the funds, phases and duration of the study, as indicated in the Award Letter. ICSSR reserves the right, based on expert opinion, to make changes in the research design, budget and duration of the project.
5.2 The detailed budget estimates and the proportionate Heads of Expenditure for these proposals are to be prepared by the Institute / Project Director/group of scholars.
i. Allocation of Heads of Expenditure
5.3 A. The remuneration for the Research Staff must be according to the ICSSR guidelines.
B. The proportionate allocation of expenditure for budget heads such as Fieldwork (travel/logistics/boarding, survey preparation or consultancy, etc.); Equipment and Study Material (computer, printer, source material, books, journals, software, data sets, workshop/seminar/publication, etc.); and Contingency charges shall be as per the ICSSR guidelines given below:

| S. No. | Head of Expenditure | Permissible Allocation |
| --- | --- | --- |
| 1. | Research Staff: full-time/part-time/hired services | Not exceeding 45% of the total budget |
| 2. | Fieldwork | Not exceeding 35% |
| 3. | Research equipment and study material (computer, printer, etc.) | Not exceeding 10% |
| 4. | Contingency | Not exceeding 5% |
| 5. | Workshop/Seminar/Publication* | Approx. 5% |
| | Total | 100% |

*The ICSSR will decide on this depending on the project’s requirements.
* The project investigator may, with the permission of the institution, re-appropriate expenditure from one sub-head to another, subject to a maximum of 10% of the particular budget head. Re-appropriation beyond 10% may be done only with the prior approval of the ICSSR.
C. Affiliating institutional overhead charges @ 10% of the awarded grant, over and above the grant, subject to a maximum of Rs. 2,00,000/-, will be released by the ICSSR after successful completion of the project.
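The allocation caps in Clause 5.3 and the overhead rule above can be checked mechanically. The sketch below is illustrative only; the head names, function names, and sample figures are this sketch’s assumptions, not ICSSR prescriptions.

```python
# Illustrative check of a proposed budget against the Clause 5.3 caps
# (45% research staff, 35% fieldwork, 10% equipment/material, 5%
# contingency) and the 10% overhead capped at Rs. 2,00,000.
# All names and figures here are hypothetical.

CAPS = {  # budget head -> maximum share of the total budget
    "research_staff": 0.45,
    "fieldwork": 0.35,
    "equipment_and_material": 0.10,
    "contingency": 0.05,
}

def heads_over_cap(heads: dict) -> list:
    """Return the budget heads whose share exceeds their permissible cap."""
    total = sum(heads.values())
    return [h for h, amount in heads.items()
            if h in CAPS and amount / total > CAPS[h]]

def overhead(grant_rupees: int) -> int:
    """Institutional overhead: 10% of the awarded grant, capped at Rs. 2,00,000."""
    return min(grant_rupees // 10, 200_000)

# A hypothetical Rs. 15 lakh proposal (the scheme's budget ceiling):
proposal = {
    "research_staff": 650_000,          # 43.3% -- within the 45% cap
    "fieldwork": 500_000,               # 33.3% -- within the 35% cap
    "equipment_and_material": 170_000,  # 11.3% -- exceeds the 10% cap
    "contingency": 70_000,              # 4.7%  -- within the 5% cap
    "workshop_seminar_publication": 110_000,  # decided by ICSSR per project
}
print(heads_over_cap(proposal))  # ['equipment_and_material']
print(overhead(1_500_000))       # 150000 (10% of the grant, under the cap)
```

In practice the ICSSR’s expert committee fixes the final budget; a check like this only flags obvious departures from the published caps.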
5.4 Project Staff shall be engaged/appointed, as per the rules of the affiliating institution of the Project Director, on a full-/part-time basis during the research work. The Project Director may decide the duration. The consolidated monthly remuneration/emoluments of the project staff must be according to the following guidelines:
| Position | Monthly Remuneration | Essential Qualifications |
| --- | --- | --- |
| Research Associate | Rs. 47,000/- | Postgraduate degree in a social science discipline (55% minimum) with NET/M.Phil./Ph.D. and two years of research experience as a Research Assistant in any project. |
| Research Assistant | Rs. 37,000/- | Postgraduate degree in a social science discipline (55% minimum) with NET/M.Phil./Ph.D. |
| Field Investigator | Rs. 20,000/- | Postgraduate degree in a social science discipline with a minimum of 55% marks. |
5.5 Selection of Research Staff should be made through an advertisement published on the respective institute’s website and a selection committee, duly approved by the Competent Authority of the institution, consisting of (1) Project Director; (2) One external subject Expert (from outside the institute where the project is located); (3) Dean of the faculty in case of University /Principal in case of College and (4) Head of the Department of the Project Director.
5.6 The rules of affiliating institutes/universities shall be applied to all field work-related expenses of the project director, co-project director (s), and project personnel.
5.7 All equipment and books purchased out of the project fund shall be the property of the affiliating institution. A detailed stock report duly signed by the Head of the Institute / Registrar / Principal must be submitted to the ICSSR. However, ICSSR may request books and/ or equipment if it so requires.
6.1 The Project Director has to join the project as per the date notified by the ICSSR by submitting the requisite documents, such as an ‘undertaking’ on a Rs.100 stamp paper duly verified by a notary, declaration in prescribed format on a Rs.100 stamp paper duly verified by a notary, Grant-in-Aid bill towards the first instalment on or before the given deadline and Registration Mandate Form of PFMS Account of those affiliating/administering institutions, which have not linked their accounts to PFMS for ICSSR grant. The joining period can be extended only in exceptional circumstances up to a maximum of three months by the ICSSR.
6.2 The total grant awarded for the Research Project will be released to the affiliating institution in instalments, as mentioned in the award letter. The ICSSR will decide in accordance with the overall project requirements.
6.3 The final instalment will be released after receipt of: the expert’s recommendation for acceptance of the Final Report; an audited statement of accounts (AC) in the prescribed format, with a utilisation certificate (UC) in Form 12A of GFR for the entire approved project amount, duly signed by the Finance Officer/Registrar/Director of the affiliating institution; and at least five research papers published in UGC-CARE and Scopus-indexed journals. A detailed stock report, duly signed by the Head of the Institute/Registrar/Principal, must also be submitted to the ICSSR. For institutions whose accounts are not audited by the CAG/AG, the utilisation certificate will be signed by the Finance Officer and a chartered accountant.
6.4 The Overhead Charges to the affiliating institution will be released after the acceptance of the Final Report, along with the receipt of the final audited Statement of Accounts and Utilisation Certificate in prescribed formats, which the ICSSR shall verify.
6.5 The Project Director will ensure that their expenditure conforms to the approved budget heads and relevant rules. The Audited Statement of Accounts with Utilization Certificate in Form 12A of GFR is mandatory for the entire approved amount for the project.
7.1 Research undertaken by a Project Director will be monitored by submitting periodic progress reports in the prescribed format. The project may be discontinued/terminated if research progress is unsatisfactory or any ICSSR rules are violated. In such cases, the entire amount must be refunded with a 10% penal interest.
7.2 The scholar/awardee must acknowledge the support of ICSSR in all their publications resulting from the project output, such as Research Papers, Journal Articles, and Articles in Edited Books, etc., and they must submit a copy of the same to the ICSSR during the course of or after completion of the project. In case of absence of acknowledgement by the scholars, they will be blacklisted, and they will not be able to apply for any schemes of ICSSR in the future. Papers published in Conference/Seminar proceedings will not be considered as they are not peer-reviewed. However, proceedings published by Scopus-indexed / UGC care-listed journals can be considered.
7.3 All project-related queries will be addressed to the Project Director/ Affiliating Institution for their timely reply.
7.4 The ICSSR may, at any time, ask for verification of accounts and other relevant documents related to the Project.
7.5 The ICSSR reserves the right to change the affiliating institute if it is found that the institute is not cooperating with the scholar and is not facilitating the timely completion of the study.
7.6 The final report submitted by the Project Director is mandatorily evaluated by an Expert appointed by the ICSSR before the release of the final instalment is considered.
7.7 The Project Director shall be personally responsible for the project's timely completion. Any member of the project staff, including the project director, cannot submit the project proposal/final report for the award of any University degree/diploma or funding by any institution. However, ICSSR will have no objection if any project staff member utilises the project data for research purposes, provided there are due acknowledgements to ICSSR.
7.8 If the researchers do not submit the requisite documents and the final report in time or the project is not completed in the stipulated period, the scholars will be blacklisted, and legal recourse will be initiated for recovery of the released grant.
7.9 As per the directions of the Ministry of Education (MoE), the sanctioned grant is to be utilised within the stipulated duration of the project. Any unspent amount shall be refunded to the ICSSR immediately upon expiry of the project’s duration. If the Project Director fails to utilise the grant for the purpose for which it was sanctioned, or fails to submit the audited statement of expenditure within the stipulated period, he/she will be required to refund the grant amount with interest @ 10% per annum.
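The 10% per annum refund interest in Clauses 7.1, 7.9, and 9.6 can be sketched as below. The guidelines do not specify simple versus compound interest, so simple (non-compounding) interest is assumed here; the helper name and example figures are this sketch’s own.

```python
# Illustrative refund computation: grant amount to be refunded plus
# penal interest @ 10% per annum. Simple interest is ASSUMED -- the
# guidelines state only "10% per annum". Helper names are hypothetical.

ANNUAL_RATE = 0.10  # 10% per annum, per the guidelines

def refund_due(amount_rupees: float, years_outstanding: float) -> float:
    """Amount to refund: principal plus simple interest at 10% p.a."""
    return amount_rupees * (1 + ANNUAL_RATE * years_outstanding)

# Rs. 1,00,000 of grant refunded 18 months (1.5 years) after falling due:
print(round(refund_due(100_000, 1.5)))  # 115000
```

Whether interest accrues from sanction, disbursal, or project expiry is not stated in the clause; the actual reckoning date would be as determined by the ICSSR.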
8.1 On completion of the study, the Project Director should submit:
A. Two hard copies of the Final Report along with softcopy in both PDF and Word formats;
B. Hard copy of the abstract in 500 words along with softcopy in both PDF and Word formats;
C. Hard copy of the Executive Summary of the final report in 5000 words along with softcopy in both PDF and Word formats;
D. Similarity index sheet (Plagiarism check) for the final report.
8.2 If the expert suggests any changes in the reports at the time of evaluation, the Project Director shall incorporate the changes within the stipulated time and should submit the following:
A. Soft copy of the modified final report in both PDF and Word formats along with two hard copies;
B. Five copies of the executive summary;
C. Softcopies of (if any) Data Sets, along with well-defined data definitions and other important information for documentation.
8.3 ICSSR checks every report for plagiarism and generates a similarity report. As a policy, ICSSR does not accept reports with a similarity beyond 10 per cent on the similarity index. Scholars must get their final report checked by their affiliating institutions for similarity index and attach a certified report of the same at the time of submission.
8.4 The scholar's final report will be considered satisfactory only after the expert appointed by the ICSSR makes the final recommendation of acceptance.
9.1 The affiliating institution must give an undertaking in the prescribed format contained in the Application Form to administer and manage the ICSSR grant.
9.2 It is also required to provide the requisite research infrastructure to the scholar and maintain proper accounts.
9.3 The affiliating institution should open and maintain a dedicated bank account for the ICSSR grant (Scheme Code-0877) duly registered at the EAT Module of the PFMS portal for the release of the grant without any delay. This account should be retained for all projects awarded by ICSSR.
9.4 The affiliating institution will be under obligation to ensure submission of the final report and an Audited Statement of Accounts and Utilization Certificate (in the prescribed Proforma GFR 12-A) duly certified by the Competent Authority of the institution, including the refund of any unspent balance.
9.5 The affiliating institution shall make suitable arrangements for preserving data relating to the study, such as filled-in schedules, tabulation sheets, manuscripts, reports, etc. The ICSSR reserves the right to demand raw data or such parts of the survey as it deems fit.
9.6 If a Project Director leaves/discontinues the project before completion of the tenure, the affiliating institution shall inform ICSSR immediately and refund the entire amount with a penal interest @ 10% per annum. The unutilised funds pending with the institutions for all projects must be returned to the ICSSR immediately. In case the universities/ institutions do not abide by the rules of the ICSSR, they shall be blacklisted for applying to schemes of ICSSR in the future.
9.7 In case a Project Director passes away before the project's completion, the affiliating institution shall immediately inform ICSSR by submitting a copy of the death certificate and settle the accounts immediately by expediting the refund of any unspent balance.
10.1 The project duration includes the time for writing the Final Report. In exceptional circumstances, if the ICSSR is satisfied with the progress of the work, including quality publications, an extension may be granted (up to three months for Minor Projects and up to six months for Major Projects) without any additional grant. Any extension beyond these periods will be referred to the competent authority of the ICSSR for a decision. The Project Director must request a no-cost extension at least three months before the end of the stipulated tenure, enclosing a copy of the progress report and the reasons for the delay with documentary evidence. Retrospective extensions will not be permitted.
10.2 The contingency grant may be utilized for stationery, computer typing-related costs, specialised assistance such as data analysis and consultation for field trips, etc., related to the research work.
10.3 Defaulters of any previous fellowship/project/programme/grant of the ICSSR will not be eligible for consideration. No scholar may hold an ICSSR research project and an ICSSR fellowship concurrently.
10.4 Foreign trips are not permissible within the awarded budget of a project. However, the Project Director may undertake data collection outside India in exceptional cases and if warranted by the needs of the proposal. For this, they must apply separately for consideration under the Data Collection Scheme of the ICSSR International Collaboration Division. However, ICSSR will not be bound to support such data collection from abroad, and the decision of the ICSSR will be final. In either case, the completion of the study should not be consequent upon such data collection support.
10.5 Any request for an additional grant over the sanctioned budget will not be considered.
10.6 The procurement of equipment/assets for the research project is allowed only if it was originally proposed, does not surpass the permissible amount, and adheres to the regulations of the affiliating institution.
10.7 The project director cannot make any changes in the research design at any stage.
10.8 Regarding Transfer of a Project/Appointment of Substitute Project Director:
A. At the request of a university/institute, the ICSSR may permit the appointment of a Substitute Project Director in exceptional circumstances.
B. The ICSSR may also appoint a Substitute Project Director if it is convinced that the project's original awardee will not be able to carry out the study successfully.
C. The ICSSR may transfer the place of the Project from one affiliating institution to another subject to submission of the following:
● Satisfactory progress report (s);
● No objection certificate from both the previous and the new university/institute;
● Audited statement of account, utilisation certificate, and unspent balance, if any.
Note: No transfer of the project/Project Director should be requested in the last six months of the study.
D. Overhead charges will be apportioned proportionally among the institutes as per the grant released or as may be finally decided by the ICSSR.
E. In case of superannuation of the Project Director and if the institution's rules require, the project transfer to a serving faculty member may be done with prior approval of the ICSSR. The Project's credit shall belong to the original Project Director.
10.9 Consideration under other call(s) would require a fresh proposal.
10.10 The Council reserves the right to reject any application without assigning any reason. It is not responsible for postal delays or losses.
10.11 Incomplete applications will not be considered in any respect.
10.12 The ICSSR has the final authority regarding the interpretation of these guidelines or any other issue.
10.13 The ICSSR will not entertain queries until the final declaration of results against a call. Lobbying for an award will lead to disqualification.
10.14 The ICSSR retains all rights to publish any project funded by it, contingent upon the recommendation by expert(s) appointed by ICSSR for publication. ICSSR shall hold copyright for the final report and outcomes of the project. Any publication or dissemination of research findings shall solely be at the discretion of ICSSR.
The overall objective of this chapter is to introduce empirical research. More specifically, the objectives are: (1) to introduce and discuss a decision-making structure for selecting an appropriate research approach, (2) to compare a selection of the introduced research methodologies and methods, and (3) to discuss how different research methodologies and research methods can be used in ...
A recent empirical study has led to some enlightening possibilities as to how academics perceive the advantages and disadvantages of empirical versus conceptual research, and what strategy the ...
It saves a lot of time. However, there are certain disadvantages too. Empirical studies are lengthy. Depending upon the number of variables and data analysis methods used, primary data analysis cannot be fit in less than 3000 words. Results can be unpredictable.
Empirical research is a type of research methodology that makes use of verifiable evidence in order to arrive at research outcomes. In other words, this type of research relies solely on evidence obtained through observation or scientific data collection methods. Empirical research can be carried out using qualitative or quantitative ...
Page ID. Jenkins-Smith et al. University of Oklahoma via University of Oklahoma Libraries. This book is concerned with the connection between theoretical claims and empirical data. It is about using statistical modeling; in particular, the tool of regression analysis, which is used to develop and refine theories.
3.1 Advantages There are some benefits of using qualitative research approaches and methods. Firstly, qualitative research approach produces the thick (detailed) description of participants' feelings, opinions, and experiences; and interprets the meanings of their actions (Denzin, 1989).
Evidence graduation is geared to the fact that for methodological reasons certain study designs yield results that are more likely to be reliable. This corresponds with the rules of the methodology of empirical research. 4,5 Thus, randomized control-group studies have a higher value than nonrandomized or uncontrolled studies.
Research problems and hypotheses are important means for attaining valuable knowledge. They are pointers or guides to such knowledge, or as formulated by Kerlinger (Citation 1986, p. 19): " … they direct investigation.". There are many kinds of problems and hypotheses, and they may play various roles in knowledge construction.
In-depth interviews are a qualitative research method that follow a ... theoretical research question to a more precise empirical one. ... both advantages and disadvantages 68. For example ...
A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review ... disadvantages and resource implications of each search method. The aim of this study is to compare current practice in using supplementary search methods with methodological guidance ...
The idea that clinical practice can be informed by empirical research, however, is not new and has been integral to psychology since the late 19th century, marked by Lightner Witmer's first psychology clinic in 1896 (see McReynolds, 1997). The Boulder Conference in 1949 formalized clinical psychology's commitment to an empirical base with the ...
Meditation has become a cultural phenomenon, and modern scientific research on the topic has exploded. Thousands of scientific articles report various benefits of meditation including clinical, physiological and well-being outcomes. Despite these benefits, drop-out rates in mindfulness-based interventions remain a problem and little work has studied the drawbacks of meditation. Reports of ...
Here we develop an empiri cal codebook of and framework fo r meditation benefits and dr awbacks (MBDs), discussing th e actionable im plications for me ditation trai ning in the real world. Th ese ...
Peer-Reviewed Empirical Articles - Searching APA PsycInfo on EBSCOhost Peer-Reviewed Empirical Articles - Searching APA PsycInfo on Ovid; Peer-Reviewed Empirical Articles - Searching APA PsycInfo on ProQuest; By the end of this 2-part tutorial you will be able to: Explain what it means for an article to be considered an empirical study.
Empirical research can also address issues emerging from India's heavy reliance on imported fossil fuels. To do so, it can identify practical ways to harness domestic solar energy, enhancing energy security and independence. Furthermore, the research can offer decentralised solar solutions to alleviate energy needs in rural and remote areas.