Breath Analysis for Medical Applications by David Zhang Dongmin Guo & Ke Yan

Author: David Zhang, Dongmin Guo & Ke Yan
Language: English
Format: EPUB
Publisher: Springer Singapore, Singapore


Table 8.1 (excerpt): classification accuracy (%) on test batches 2–10; the ± term is the standard deviation over 10 runs.

| Method      | B2    | B3    | B4    | B5    | B6    | B7    | B8    | B9    | B10   | Average      |
|-------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------------|
| TMTL-serial | 97.35 | 98.80 | 90.06 | 98.48 | 95.35 | 91.50 | 91.84 | 96.38 | 71.56 | 92.37        |
| Joint train | 59.41 | 56.33 | 58.63 | 37.26 | 44.77 | 43.17 | 20.72 | 34.26 | 35.55 | 43.34 ± 1.14 |
| DCAE-basic  | 97.58 | 96.82 | 88.57 | 98.73 | 95.40 | 94.76 | 91.19 | 95.66 | 74.60 | 92.59 ± 0.61 |
| DCAE-CL     | 97.47 | 96.13 | 90.50 | 98.58 | 94.94 | 95.47 | 92.62 | 96.40 | 76.82 | 93.21 ± 0.52 |
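As a quick sanity check on the table, the "Average" column is the plain mean of the nine per-batch accuracies (the ± term is the standard deviation over the 10 repeated runs, not over batches). A minimal check using the DCAE-CL row:

```python
# Reproduce the "Average" column of Table 8.1 from the per-batch
# accuracies (batches 2-10) of the DCAE-CL row.
dcae_cl = [97.47, 96.13, 90.50, 98.58, 94.94, 95.47, 92.62, 96.40, 76.82]
avg = sum(dcae_cl) / len(dcae_cl)
print(round(avg, 2))  # 93.21, matching the table
```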

From Fig. 8.4, we can see that the accuracy improves as the number of transfer samples increases, but reaches a plateau beyond a certain point. The performance of variable standardization and HHTL is not promising, because the capacity of feature-level correction methods is insufficient when facing complex drift. The fact that HHTL beats variable standardization implies that the augmented feature vector learned by mSDA is better than the original features. DCAE outperforms the latent-space methods because it explicitly models and corrects drift, thus avoiding the information loss caused by the entanglement between drift and useful information. Like the model-level methods RMTL and TCTL, DCAE considers discriminative information when correcting drift. Better yet, it can capture nonlinear and more abstract structures in the data, and the learned representation can be fed into various prediction models, which makes it more convenient to use. The two DCAE variants outperform the other methods especially when the number of transfer samples is small.
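The intuition of explicitly modeling drift while reconstructing the useful signal can be illustrated with a toy stand-in. This is a simplified sketch, not the book's DCAE architecture: the synthetic data, the linear encoder, the additive drift, and the mean-alignment penalty between the two batches' hidden codes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a source batch and a drifted batch (drift simulated as a
# simple additive offset; purely illustrative).
d, h, n, lr = 8, 4, 200, 0.01
X_src = rng.normal(size=(n, d))
X_tgt = X_src + 1.5                      # simulated additive drift

W1 = rng.normal(scale=0.1, size=(d, h))  # linear encoder weights
W2 = rng.normal(scale=0.1, size=(h, d))  # linear decoder weights
b1, b2 = np.zeros(h), np.zeros(d)

def forward(X):
    Z = X @ W1 + b1                      # hidden code
    return Z, Z @ W2 + b2                # reconstruction

losses = []
for _ in range(300):
    Zs, Rs = forward(X_src)
    Zt, Rt = forward(X_tgt)
    # Both batches are asked to reconstruct the drift-free source signal,
    # and an alignment penalty pulls their hidden codes together.
    Es, Et = Rs - X_src, Rt - X_src
    gap = Zs.mean(0) - Zt.mean(0)
    loss = (Es**2).mean() + (Et**2).mean() + (gap**2).sum()
    losses.append(loss)
    # Exact gradients for this linear model.
    dRs, dRt = 2 * Es / Es.size, 2 * Et / Et.size
    dZs = dRs @ W2.T + 2 * gap / n
    dZt = dRt @ W2.T - 2 * gap / n
    gW2 = Zs.T @ dRs + Zt.T @ dRt
    gb2 = dRs.sum(0) + dRt.sum(0)
    gW1 = X_src.T @ dZs + X_tgt.T @ dZt
    gb1 = dZs.sum(0) + dZt.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Training drives down both the reconstruction error and the gap between the two batches' codes, which is the sense in which a corrected representation can then be fed into any downstream classifier.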

More results of existing methods are listed in Table 8.1. "No transfer" means the prediction model trained on batch 1 is directly applied to each later batch k. Its unsatisfactory performance confirms the influence of drift. The two traditional methods based on component correction (CC-PCA and OSC) do not achieve a large improvement because they rely on a clear separation of drift and useful information in the data. The results of ensemble, source domain adaptation extreme learning machine (DAELM-S), and manifold regularization with combination geodesic flow kernel (ML-comGFK) are copied from the original papers. DAELM-S achieves good results with the help of 30 selected labeled samples in each batch, whereas our proposed methods achieve higher accuracy with only 10 transfer samples. The last three rows are results obtained by deep networks, for which the standard deviation of the average accuracy over the 10 runs is also reported. In "joint train", samples in batches 1 and k are pooled together to train a stacked denoising autoencoder, as suggested by Glorot et al. (2011); its poor performance indicates that this strategy is not suitable for drift correction. Comparing DCAE-basic and DCAE-CL, the former is better on earlier batches whereas the latter wins on later batches, which exhibit larger drift. This indicates that DCAE-CL is preferable when the time-varying drift is more complex. DCAE surpasses the TMTL results of the last chapter, possibly because it can learn more abstract and nonlinear representations from the data.





