coronawarnapp as digital placebo
a draft reply to https://twitter.com/rbbinforadio/status/1337049900991836160

1.5 million warnings: this is an estimate, and there is no way to verify the figure publicly. only the uploaded diagnosis keys are known, and one can extrapolate from these to an average number of "true" contacts. it would be important to know how many red alerts actually lead to positive test results; technically this is quite feasible without compromising data protection (similar to qr tokens). the rki claims, however, that due to encryption and decentralization exact performance data is not available. citing data protection as a reason for not allowing performance monitoring is not a good design pattern.

there are studies on the extent to which apps can help contain a pandemic. they help with prevalence, i.e. testing accuracy, below an r0 of 1. 2) 3) even in the current out-of-control infection situation, the app can give local indications of how many risk contacts a user has accumulated. but since the accuracy of the app is currently below 50%, thanks to a high proportion of false negatives, what it conveys is at best a matter of chance, or of good faith (placebo effect): one feels safer, but one definitely is not, because a high proportion of contacts is known to the system yet never surfaces as a risk contact. depending on personal risk behavior, the app generates either more caution or more carelessness. and since the data situation is so inaccurate, for technical reasons on the backend side, the app hands corona deniers and critics of the measures further arguments of a conspiracy-theoretical nature. it therefore creates not more trust, but more false trust, and with it disillusionment, mistrust, insecurity and excessive demands.
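the extrapolation from uploaded diagnosis keys to a warning count can be sketched as a back-of-envelope calculation; the key count and the average number of risk contacts per key used here are purely illustrative assumptions, not official figures:

```python
# back-of-envelope extrapolation: published key uploads -> warning estimate.
# both input numbers are illustrative assumptions, not official figures.

def estimate_warnings(uploaded_keys: int, avg_risk_contacts_per_key: float) -> int:
    """warnings can only be extrapolated, since the decentralized design
    reveals nothing beyond the uploaded diagnosis keys themselves."""
    return round(uploaded_keys * avg_risk_contacts_per_key)

# e.g. 150,000 uploads x 10 average risk contacts ~ 1.5 million warnings
print(estimate_warnings(150_000, 10))  # 1500000
```

this is exactly why the "1.5 million warnings" figure cannot be verified: the public number is a product of an assumed contact multiplier, not a measured outcome.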
this includes exaggerated expectations of data protection that ignore the facts of the infection protection law and its two amendments in the app design. to a certain extent one can observe an institutional denial at work, a libertarian-neoliberal design ideology that combines user-centeredness with institutional distrust and a service expectation towards the health system, of which one is an active part as a patient or potentially infected person. the effect could be much better if the backend design had been based on a subset of DEMIS or SORMAS@DEMIS in the roadmap, and if the data protection debate had concerned itself with the centralization of health data in this reporting system, instead of establishing an inefficient, virtual, parallel reporting chain just for this app, one which de facto overrides the legal obligation to report because it makes reporting at least voluntary in the digital channel. this is above all absurd because the personal data ends up at the health office anyway, where it could be queried for verification in a secure and anonymized way and made available to the app infrastructure via a push procedure. straightening out the positive test reporting process would raise the hit rate from currently about 20% to about 90%, which roughly matches the confidence in the accuracy of the underlying PCR tests. the app is of less use because it has been shown that for every shared positive result there are up to three unshared infectious ones in the system, for which no alerts are issued. data protection is not a good excuse for this, because the methods chosen are neither appropriate, nor suitable, nor necessary, and even their legitimacy can be called into question because the reporting obligation is "virtually" circumvented.
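the 20% vs 90% hit-rate claim follows from simple arithmetic; the sharing rate and test sensitivity used here are illustrative assumptions derived from the ratios quoted above (up to three unshared positives per shared one, pcr confidence around 90%):

```python
# hit rate = probability that an infectious app user actually triggers alerts.
# inputs are illustrative, derived from the ratios quoted in the text.

def hit_rate(sharing_rate: float, test_sensitivity: float) -> float:
    """a positive user only warns contacts if the test detects the
    infection AND the result is actually shared with the app."""
    return sharing_rate * test_sensitivity

# status quo: for every shared positive, up to three stay unshared,
# i.e. a sharing rate of about 1/4 -> roughly a 20% hit rate
print(hit_rate(0.25, 0.9))  # 0.225

# with an automatic, consent-based push from the health office the
# sharing rate approaches 1, so the hit rate approaches the pcr
# test sensitivity itself (~90%)
print(hit_rate(1.0, 0.9))
```

the point of the sketch: no tuning of bluetooth attenuation or risk scoring can recover what is lost in the sharing step, because the two factors simply multiply.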
they are not suitable because there is a high error rate and there are significant difficulties in adoption, which call the overall usefulness of the risk assessment into question, not in concept but in implementation. they are not necessary because there are simpler methods that are just as privacy-compliant while not giving third parties any opportunity to access personal data. cryptographically secured methods of this kind are well known from the design of identity servers and apis. they would simplify the entire data flow as far as the handling of positive test results is concerned, with the aim of drastically reducing the error rate and making the risk warnings more trustworthy. above all, the methods are not appropriate, because the effort is far too high for the result; it is unreasonable to assume that health offices are less trustworthy than, say, private telecom call centers, as far as training and authorization levels are concerned. so it is a health risk to rely on the alerts, not only because of infected people who do not use the app, but especially because of those who use the app but do NOT use the intended function to share their results, or whose results are lost along the way, through teletan, qr codes and the surrounding processes. this is the central and fundamental weakness of the app, and it is poorly and inaccurately justified with pr arguments about acceptance. on the contrary, it should be arguable that concrete and measurable personal utility still creates the best network effects, compared to fear, uncertainty and doubt. it is not the processes around it, but the data flow design of the app itself that misses the mark in many respects, produces errors, and adds unnecessary overhead. as a former civilian service conscript, i know from clinical practice that data from test labs should never be handed directly to patients, not least because the tubes are anonymized and, for data protection reasons, the labs know nothing about the allocation.
a medical test lab is not an online service company for end customers like amazon. the intermediary is the doctor, the health department, or at best an individualizable reporting portal where status reports, further information, contact options and FAQs could be found. you can read it like this: the current architecture of the CWA is neoliberal in its institutional distance, in bypassing the health offices, and in its tendency to trust apple/google more than the reporting chains of the health system, on which the effectiveness of pandemic prevention undoubtedly and directly depends. the doctor is legally obligated only to the health department. this should be taken into account in the data flow design, instead of this virtual, alternative data flow, which produces considerable additional effort and error rates in the name of a misunderstood data protection or consumer protection. misunderstood, because there is a common goal, protection against infection, for which a data-protection-compliant implementation of an infrastructure is sought, in which an app can be an important component. there are FAQs from the health insurance companies that try to explain the very complicated workings of the CWA to doctors. the doctors' offices are often overworked and overburdened by the pandemic, are not legally obligated, and, as far as I know, are not paid extra for this additional CWA work. it would have been sufficient to engage with the data flow elsewhere and find a more data-efficient and effective way to share positive results, without sowing fear and distrust in the work and tasks of the health departments.
so it is wrong to ask doctors to tick their boxes better, because that does not scale well, instead of understanding that the problem is the mis-design of the app itself; its requirements for daily practice are a typical example of digital solutionism, a complicated, labyrinthine approach that tries to circuitously implement certain safety premises that are purely hypothetical and, within a pandemic, quite unproductive (state as bad actor). in addition, personal data such as cell phone numbers are stored when a user calls the call center, e.g. to share her positive test result. there is no data protection notice during the automated acoustic dialog, although there are simple ways to provide one, and it is not pointed out which personal data are stored. on the whole, this use case is unnecessary anyway, because the case data are already reported to and stored at the health department. the only thing missing is a verification server for the respective local health office, which anonymously and in a privacy-compliant way triggers the diagnosis key uploads, acting as an anonymizing "privacy firewall". this can be done via api and can happen automatically, as soon as the report is received by the health office, after prior consent. whoever does not consent can then uninstall the app, and all associated keys will be deleted, instead of degrading the accuracy of the risk assessment as a false negative by default. the extra effort of qr codes, teletans and lab test results is unnecessary and not commensurate with what the app is supposed to do. most importantly, these mechanisms are incompatible with the reporting infrastructure culture of the far too slowly implemented DEMIS. upcoming lawful changes in the workflows of the digital reporting infrastructure will most likely render the app even more a superfluous artifact of a particular debate in early 2020, where exaggerated expectations met hysterical fears.
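the health-office verification server proposed above could look roughly like this minimal sketch; the function names, the consent flag and the hmac pseudonymization are all hypothetical assumptions for illustration, not part of any real CWA or DEMIS api:

```python
# sketch of a health-office verification server acting as a "privacy
# firewall": it receives a positive report, checks consent, and pushes
# only an anonymous upload authorization to the app backend.
# names, fields and the hmac pseudonym are hypothetical assumptions.
import hashlib
import hmac
import secrets

OFFICE_SECRET = secrets.token_bytes(32)  # per-office signing key

def pseudonym(case_id: str) -> str:
    """one-way pseudonym, so the app backend never sees case data."""
    return hmac.new(OFFICE_SECRET, case_id.encode(), hashlib.sha256).hexdigest()

def on_positive_report(case_id: str, consented: bool):
    """called automatically when a positive report reaches the office.
    returns the anonymous authorization pushed to the app backend,
    or None if the patient did not consent (keys are then deleted)."""
    if not consented:
        return None  # no upload is triggered for non-consenting users
    return {"upload_token": pseudonym(case_id), "valid_hours": 24}

auth = on_positive_report("case-4711", consented=True)
print(auth["valid_hours"])  # 24
```

the design choice worth noting: verification rides on the legally mandatory reporting chain that already exists, so no teletan, qr code or call center is needed, and the app backend still learns nothing but an opaque token.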
against the background of what has been well documented since 2014, a sustainable architecture for the app would have realized elements of the DEMIS architecture, such as the lab reporting workflow plus portal. the web is full of laconic anecdotes about using the app; they are due to the cumbersome process design in the backend, where the health department in particular is bypassed or pushed to a marginal position, often to the overload of doctors and offices, but also to the expectation of real-time behavior and the lack of service from the labs. how can an app be justified that, on the one hand, obscures a critical, data-protection-oriented examination of the digital infrastructure culture of the RKI and the health care institutions, and, on top of that, sets up a dysfunctional virtual parallel reporting chain with an extremely high error rate? a debacle of digitalization, but above all for the critical digital civil society in germany.

1) performance data https://micb25.github.io/dka/
2) A mathematical assessment of the efficiency of quarantining and contact tracing in curbing the COVID-19 epidemic https://www.medrxiv.org/content/10.1101/2020.05.04.20091009v1
3) (as an extension, a cluster-oriented strategy is possible, which analyzes and bundles the serial contact chains into graphs. for such back tracing, however, a change at the level of the google/apple exposure notification layer would be necessary, to detect more simultaneous connections in a time window and to recursively count edge values in order to evaluate the network centrality of risk potentials, i.e. only local sections of graphs. another use case would allow the CWA to be used stationary, for places instead of people.)
4) DEMIS https://www.rki.de/DE/Content/Infekt/IfSG/DEMIS/DEMIS_node.html
5) sormas@demis https://www.bundesgesundheitsministerium.de/service/begriffe-von-a-z/o/oeffentlicher-gesundheitsheitsdienst-pakt/digitale-unterstuetzung-gesundheitsaemter.html#c19083
6) results of the new reminder feature https://github.com/corona-warn-app/cwa-wishlist/issues/150
7) recalculating expected numbers of unshared diagnostic keys https://github.com/micb25/dka/issues/21#issuecomment-743406069
8) of the two report types in the exposure notifications api, the more complicated option with the higher probability of unreliability was chosen: "A user has been confirmed by a medical provider to have tested positive." instead of "A health authority authorized a user-initiated self-report based on symptoms (if your app supports this use case)." https://developers.google.com/android/exposure-notifications/exposure-notifications-api#data-structures
9) Barriers to the Large-Scale Adoption of the COVID-19 Contact-Tracing App in Germany https://osf.io/tga2u/#!
10) zero knowledge proof validation https://github.com/xlab-si/emmy
11) Contact Tracing App Privacy: What Data Is Shared By Europe’s GAEN Contact Tracing Apps https://www.scss.tcd.ie/Doug.Leith/pubs/contact_tracing_app_traffic.pdf
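the cluster-oriented back tracing described in note 3) can be illustrated with a minimal sketch; the contact data, the time window and the centrality measure (plain degree as a stand-in for recursively counted edge values) are all assumptions for illustration, not anything the exposure notification api currently exposes:

```python
# minimal sketch of cluster-oriented back tracing: bundle serial
# contact chains into a graph and score nodes by a simple local
# centrality (degree within a time window). data, window and
# measure are illustrative assumptions.
from collections import defaultdict

# (person_a, person_b, day) risk contacts, as contact chains report them
contacts = [
    ("a", "b", 1), ("b", "c", 1), ("b", "d", 2),
    ("c", "d", 2), ("d", "e", 3),
]

def local_centrality(contacts, window):
    """degree centrality restricted to a local time window, evaluating
    only a local section of the contact graph."""
    lo, hi = window
    degree = defaultdict(int)
    for a, b, day in contacts:
        if lo <= day <= hi:
            degree[a] += 1
            degree[b] += 1
    return dict(degree)

# restricted to days 1-2, "b" emerges as the local cluster hub
print(local_centrality(contacts, (1, 2)))  # {'a': 1, 'b': 3, 'c': 2, 'd': 2}
```

in a real deployment the api change note 3) calls for would be needed first, because the current exposure notification layer neither records enough simultaneous connections per window nor exposes edges at all.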