{"id":357153,"date":"2024-10-20T01:15:37","date_gmt":"2024-10-20T01:15:37","guid":{"rendered":"https:\/\/pdfstandards.shop\/product\/uncategorized\/bs-iso-135282015-tc\/"},"modified":"2024-10-26T01:41:56","modified_gmt":"2024-10-26T01:41:56","slug":"bs-iso-135282015-tc","status":"publish","type":"product","link":"https:\/\/pdfstandards.shop\/product\/publishers\/bsi\/bs-iso-135282015-tc\/","title":{"rendered":"BS ISO 13528:2015 – TC"},"content":{"rendered":"
| PDF Pages | PDF Title |
| --- | --- |
| 1 | compares BS ISO 13528:2015 |
| 2 | TRACKED CHANGES Text example 1 — indicates added text (in green) |
| 169 | Foreword |
| 170 | 0 Introduction |
| 173 | 1 Scope; 2 Normative references; 3 Terms and definitions |
| 176 | 4 General principles; 4.1 General requirements for statistical methods |
| 177 | 4.2 Basic model; 4.3 General approaches for the evaluation of performance |
| 178 | 5 Guidelines for the statistical design of proficiency testing schemes; 5.1 Introduction to the statistical design of proficiency testing schemes; 5.2 Basis of a statistical design |
| 179 | 5.3 Considerations for the statistical distribution of results |
| 180 | 5.4 Considerations for small numbers of participants; 5.5 Guidelines for choosing the reporting format |
| 182 | 6 Guidelines for the initial review of proficiency testing items and results; 6.1 Homogeneity and stability of proficiency test items |
| 183 | 6.2 Considerations for different measurement methods; 6.3 Blunder removal; 6.4 Visual review of data |
| 184 | 6.5 Robust statistical methods; 6.6 Outlier techniques for individual results |
| 185 | 7 Determination of the assigned value and its standard uncertainty; 7.1 Choice of method of determining the assigned value |
| 186 | 7.2 Determining the uncertainty of the assigned value |
| 187 | 7.3 Formulation; 7.4 Certified reference material |
| 188 | 7.5 Results from one laboratory |
| 189 | 7.6 Consensus value from expert laboratories |
| 190 | 7.7 Consensus value from participant results |
| 191 | 7.8 Comparison of the assigned value with an independent reference value |
| 192 | 8 Determination of criteria for evaluation of performance; 8.1 Approaches for determining evaluation criteria; 8.2 By perception of experts; 8.3 By experience from previous rounds of a proficiency testing scheme |
| 193 | 8.4 By use of a general model |
| 194 | 8.5 Using the repeatability and reproducibility standard deviations from a previous collaborative study of precision of a measurement method; 8.6 From data obtained in the same round of a proficiency testing scheme |
| 195 | 8.7 Monitoring interlaboratory agreement; 9 Calculation of performance statistics; 9.1 General considerations for determining performance |
| 196 | 9.2 Limiting the uncertainty of the assigned value |
| 197 | 9.3 Estimates of deviation (measurement error) |
| 198 | 9.4 z scores |
| 199 | 9.5 z′ scores |
| 200 | 9.6 Zeta scores (ζ) |
| 201 | 9.7 En scores; 9.8 Evaluation of participant uncertainties in testing |
| 202 | 9.9 Combined performance scores |
| 203 | 10 Graphical methods for describing performance scores; 10.1 Application of graphical methods; 10.2 Histograms of results or performance scores |
| 204 | 10.3 Kernel density plots |
| 205 | 10.4 Bar-plots of standardized performance scores; 10.5 Youden Plot |
| 206 | 10.6 Plots of repeatability standard deviations |
| 207 | 10.7 Split samples |
| 208 | 10.8 Graphical methods for combining performance scores over several rounds of a proficiency testing scheme |
| 209 | 11 Design and analysis of qualitative proficiency testing schemes (including nominal and ordinal properties); 11.1 Types of qualitative data; 11.2 Statistical design |
| 210 | 11.3 Assigned values for qualitative proficiency testing schemes |
| 211 | 11.4 Performance evaluation and scoring for qualitative proficiency testing schemes |
| 214 | Annex A (normative) Symbols |
| 216 | Annex B (normative) Homogeneity and stability of proficiency test items |
| 224 | Annex C (normative) Robust analysis |
| 235 | Annex D (informative) Additional Guidance on Statistical Procedures |
| 239 | Annex E (informative) Illustrative Examples |
| 261 | Bibliography |
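
The authoritative formulas and acceptance criteria for the performance statistics listed under Clause 9 are given in the standard itself. Purely as orientation, the minimal Python sketch below assumes the commonly quoted forms z = (x − x_pt)/σ_pt (9.4) and ζ = (x − x_pt)/√(u(x)² + u(x_pt)²) (9.6), together with the conventional interpretation of |score| ≤ 2 as acceptable and |score| ≥ 3 as an action signal; the function and variable names are illustrative and are not taken from the document.

```python
import math

def z_score(x: float, x_pt: float, sigma_pt: float) -> float:
    """z score: participant result x against assigned value x_pt,
    scaled by the standard deviation for proficiency assessment sigma_pt."""
    return (x - x_pt) / sigma_pt

def zeta_score(x: float, u_x: float, x_pt: float, u_x_pt: float) -> float:
    """Zeta score: deviation scaled by the combined standard uncertainties
    of the participant result and the assigned value."""
    return (x - x_pt) / math.sqrt(u_x**2 + u_x_pt**2)

def interpret(score: float) -> str:
    """Conventional reading: |score| <= 2 acceptable,
    2 < |score| < 3 warning signal, |score| >= 3 action signal."""
    a = abs(score)
    if a <= 2.0:
        return "acceptable"
    return "warning signal" if a < 3.0 else "action signal"

# Illustrative values only (not taken from the standard).
z = z_score(x=10.7, x_pt=10.0, sigma_pt=0.25)
print(f"z = {z:.2f} -> {interpret(z)}")  # z = 2.80 -> warning signal
```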