  1. Steganalysis in high dimensions: Fusing classifiers built on random subspaces
     Jan Kodovský, Jessica Fridrich
     January 25, 2011 / SPIE

  2. Motivation
     Modern steganography
     – Minimizing a distortion function in a high-dimensional feature space
       Example: HUGO [Pevný-2010] (spatial domain) – 10^7 dimensions
     – Preserving complex models
       Example: Optimized ±1 embedding (JPEG domain) [Filler-Yesterday]
     Modern approach to steganalysis
     – Needs to follow suit and capture more and more statistics
     – Cartesian calibration [2009] – doubles the dimensionality
     – Merging of existing features together
       ±1 embedding → SPAM features (686) [Pevný-2009]
       YASS algorithm (JPEG domain) → CDF features (1,234) [2010]

  3. Curse of dimensionality
     – Growing complexity of training
     – Limited training data / no access to the cover source
     – Degradation of generalization abilities (overtraining) ⇒ model assumptions / regularization
     – Problems with data / memory management
     – Saturation of performance below its potential
     ⇒ Features are designed to have low dimensionality

  4. Our goals
     – Challenge the low-dimensional limitation on feature design
     – Replace human design of features with an automated procedure
     – Rethink the machine learning approach to steganalysis
     – Classify in very high dimensions with low complexity and without compromising performance
     – Improve state-of-the-art steganalysis

  5. What are the options?
     1. Apply a classification tool of choice directly
     2. Reduce dimensionality and then classify
        – Unsupervised techniques (PCA)
        – Supervised techniques (feature extraction / selection)
        – Can be thought of as part of the feature design
     3. Reduce dimensionality and simultaneously classify
        – Minimize an appropriately defined objective function (SVDM)
        – Iterative process with classification feedback (embedded methods)
     4. Ensemble methods
        – Reduce dimensionality randomly and construct a simple classifier
        – Repeat L times and aggregate the individual decisions

  6. The proposed framework
     Step 1 – Form high-dimensional prefeatures
     – Capture as many dependencies among cover elements as possible
     – Don't be restricted by dimensionality
     – Emphasize diversity of individual features
     Step 2 – Classify in high dimensions using an ensemble approach
     [Diagram: prefeatures of dimension d are projected onto L random subspaces of dimension k ≪ d; a classifier is built on each subspace and the L decisions are fused]

  7. Specific implementation
     Random subspace = random selection of k features (without repetition)
     ⇒ The complexity does not depend on the dimensionality d
     Individual classifiers (base learners)
     – Need to be sufficiently diverse (need to make different errors)
     – Weak and unstable classifiers preferable
     – Our choice: Fisher Linear Discriminants (FLDs)
     Fusion = majority voting scheme: ∑_{i=1}^{L} decision(i) > threshold
     Parameters: k ≈ 300–3000, L ≈ 30–150
     Relation to previous art:
     – [Freund-1999] – Boosting (aggregation of weak classifiers)
     – [Breiman-2001] – Random forests (base learners = trees)
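A minimal sketch of the ideas on this slide (random feature subspaces drawn without repetition, FLD base learners, majority-vote fusion), written in Python/NumPy under the assumption that cover and stego features are stored as matrices X_cover and X_stego of shape (n_images, d). The function names and the small regularization term are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the random-subspace FLD ensemble; illustrative, not the authors' code.
import numpy as np

def train_fld(Xc, Xs):
    """Fisher Linear Discriminant on one subspace: projection w and threshold b."""
    mc, ms = Xc.mean(axis=0), Xs.mean(axis=0)
    # Within-class scatter, lightly regularized so it stays invertible
    Sw = np.cov(Xc, rowvar=False) + np.cov(Xs, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, ms - mc)
    b = w @ (mc + ms) / 2.0            # threshold halfway between class means
    return w, b

def train_ensemble(X_cover, X_stego, k=400, L=31, seed=0):
    """Build L FLD base learners, each on a random k-dimensional subspace."""
    rng = np.random.default_rng(seed)
    d = X_cover.shape[1]
    learners = []
    for _ in range(L):
        idx = rng.choice(d, size=k, replace=False)     # random subspace
        w, b = train_fld(X_cover[:, idx], X_stego[:, idx])
        learners.append((idx, w, b))
    return learners

def predict(learners, X):
    """Majority vote: sum of individual decisions compared to L/2."""
    votes = sum((X[:, idx] @ w > b).astype(int) for idx, w, b in learners)
    return votes > len(learners) / 2    # True = classified as stego
```

The vote threshold here is simply L/2, i.e. the majority-voting rule from the slide.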

  8. Comparison with SVM
     JPEG domain, algorithm nsF5, database of 6,500 images
     State-of-the-art feature sets
     – CC-PEV (2 × 274 = 548) – [Pevný-2007] + Cartesian calibration
     – CC-SHI (2 × 324 = 648) – [Shi-2006] + Cartesian calibration
     Ensemble: k = 400, L = 31
     Training time: Ensemble 70 sec, G-SVM 250 sec (3.5× longer); full training: 8 hrs!
     [Plot: testing error vs. relative payload (bpac, 0.05–0.20) for CC-PEV (548); G-SVM vs. Ensemble]

  9. Comparison with SVM (continued)
     Same setup as the previous slide; the plot now also includes CC-SHI (648).
     [Plot: testing error vs. relative payload (bpac, 0.05–0.20) for CC-PEV (548) and CC-SHI (648); G-SVM vs. Ensemble (k = 400, L = 31)]

  10. Generating high-dimensional prefeatures (JPEG domain)
      [Diagram: DCT plane, 8 × 8 grid; co-occurrences capture intra-block dependencies, inter-block dependencies, and combinations of both]
      CC-CF features
      – 2D co-occurrence matrices
      – Driven by mutual information
      – N matrices in total
      – Truncated to [−T, T]
      – Cartesian calibration
      – Dimension 2 × N × (2T + 1)^2
      – T = 4, N = 300 → dim = 48,600
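As a rough illustration of one such prefeature, here is a sketch of a single 2D co-occurrence matrix of truncated DCT coefficients for horizontally adjacent positions. The mutual-information-driven selection of coefficient pairs and the Cartesian calibration step from the slide are omitted, and the array name `dct` is a hypothetical input.

```python
# Sketch: one horizontal 2D co-occurrence of truncated values, assuming `dct`
# is a 2D NumPy array of quantized DCT coefficients; pair selection and
# Cartesian calibration are omitted.
import numpy as np

def cooccurrence_2d(dct, T=4):
    """(2T+1) x (2T+1) co-occurrence of horizontally adjacent coefficients."""
    x = np.clip(dct, -T, T).astype(int)        # truncate to [-T, T]
    left, right = x[:, :-1].ravel(), x[:, 1:].ravel()
    C = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(C, (left + T, right + T), 1)     # accumulate pair counts
    return C / C.sum()                         # normalize to a distribution
```

Each such matrix contributes (2T + 1)^2 = 81 values for T = 4, which is how 2 × 300 × 81 gives the quoted 48,600 dimensions.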

  11. Steganalysis of nsF5
      Influence of parameters L and k (CC-CF, 48,600 features, payload 0.05 bpac)
      [Plot: testing error vs. number of fused classifiers L (0–150) for k = 1000, 2000, 3000]
      – k = 2000, L = 149 → 30 min
      – G-SVM: 7.5 hrs (15× longer); full training > month
      – Performance quickly saturates as L grows
      – Choice of k is important (a 1D search may be conducted)

  12. Steganalysis of nsF5
      Can we improve the state of the art?
      – CC-PEV (548): classified with G-SVM
      – All other feature sets: Ensemble, k = 2000, L = 149
      [Plot: testing error vs. relative payload (bpac, 0.05–0.20) for CC-PEV (548) and CC-CF (48,600)]

  13. Steganalysis of nsF5
      Can we improve the state of the art? (continued)
      [Plot: previous plot with the ALL (49,796) feature set added]
      – ALL (49,796) = CC-PEV (548) + CC-SHI (648) + CC-CF (48,600)

  14. Steganalysis of nsF5
      Can we improve the state of the art? (continued)
      [Plot: previous plot with the ALL+ (49,796) feature set added]
      – ALL (49,796) = CC-PEV (548) + CC-SHI (648) + CC-CF (48,600)
      – ALL+ = ALL with 300 of the 2000 randomly selected features always chosen from CC-PEV

  15. Generating high-dimensional prefeatures (SPATIAL domain)
      Modeling the joint distribution of higher-order local residuals
      Horizontal residual: H_ij = x_ij − Pred(N^h_ij), where N^h_ij is the horizontal neighborhood of x_ij

      Order | H_ij
      2     | (1/2)(−x_{i,j−1} + 2x_{ij} − x_{i,j+1})
      3     | (1/3)(−x_{i,j−1} + 3x_{ij} − 3x_{i,j+1} + x_{i,j+2})
      4     | (1/6)(x_{i,j−2} − 4x_{i,j−1} + 6x_{ij} − 4x_{i,j+1} + x_{i,j+2})
      5     | (1/10)(x_{i,j−2} − 5x_{i,j−1} + 10x_{ij} − 10x_{i,j+1} + 5x_{i,j+2} − x_{i,j+3})
      6     | (1/20)(−x_{i,j−3} + 6x_{i,j−2} − 15x_{i,j−1} + 20x_{ij} − 15x_{i,j+1} + 6x_{i,j+2} − x_{i,j+3})
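A minimal sketch of how such residuals can be computed, assuming the image is a 2D NumPy array; the kernels below simply transcribe the table above, and the output is the "valid" part of the row-wise correlation, so it is slightly narrower than the input. This is illustrative, not the authors' code.

```python
# Sketch: horizontal residuals from the table above; illustrative only.
import numpy as np

# order -> coefficients of H_ij = x_ij - Pred(N^h_ij), as listed on the slide
KERNELS = {
    2: np.array([-1, 2, -1]) / 2,
    3: np.array([-1, 3, -3, 1]) / 3,
    4: np.array([1, -4, 6, -4, 1]) / 6,
    5: np.array([1, -5, 10, -10, 5, -1]) / 10,
    6: np.array([-1, 6, -15, 20, -15, 6, -1]) / 20,
}

def horizontal_residual(img, order=2):
    """Correlate each image row with the chosen kernel (valid part only)."""
    k = KERNELS[order]
    x = img.astype(float)
    n = len(k)
    # weighted sum of horizontally shifted copies of the rows
    return sum(k[t] * x[:, t:x.shape[1] - n + 1 + t] for t in range(n))
```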

  16. Generating high-dimensional prefeatures (SPATIAL domain)
      Modeling the joint distribution of higher-order local residuals
      Residuals in four directions (horizontal, vertical, diagonal, minor diagonal):
      – H_ij = x_ij − Pred(N^h_ij)
      – V_ij = x_ij − Pred(N^v_ij)
      – D_ij = x_ij − Pred(N^d_ij)
      – M_ij = x_ij − Pred(N^m_ij)
      H_ij, V_ij, D_ij, M_ij → MARKOV features
      min{H_ij, V_ij, D_ij, M_ij}, max{H_ij, V_ij, D_ij, M_ij} → MINMAX features
      3D co-occurrences, dimension 20 × (2T + 1)^3 (T = 4 → dim = 14,580)
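A small sketch of the min/max combination, reusing the order-2 residual idea from the previous slide; the predictors here are plain two-neighbor averages in each direction, so this only illustrates the structure of the MINMAX residuals, not the exact feature set.

```python
# Sketch: order-2 residuals in four directions and their min/max combination,
# assuming `img` is a 2D NumPy array; crops to the interior so all four
# residuals are defined at every kept pixel. Illustrative only.
import numpy as np

def minmax_residuals(img):
    x = img.astype(float)
    c = x[1:-1, 1:-1]                                   # interior pixels x_ij
    H = c - (x[1:-1, :-2] + x[1:-1, 2:]) / 2            # horizontal neighbors
    V = c - (x[:-2, 1:-1] + x[2:, 1:-1]) / 2            # vertical neighbors
    D = c - (x[:-2, :-2] + x[2:, 2:]) / 2               # main diagonal
    M = c - (x[:-2, 2:] + x[2:, :-2]) / 2               # minor diagonal
    stack = np.stack([H, V, D, M])
    return stack.min(axis=0), stack.max(axis=0)         # MINMAX residuals
```

For T = 4, each 3D co-occurrence of truncated residuals has (2T + 1)^3 = 729 bins, which with the slide's factor of 20 gives the quoted 14,580 dimensions.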

  17. Steganalysis of HUGO
      – G-SVM → CDF (1,234) = CC-PEV (548) + SPAM (686)
      – Ensemble → MINMAX+MARKOV (14,580), k = 1600, L = 51
      [Plot: testing error vs. relative payload (bpp, 0.1–0.5) for G-SVM (CDF) and Ensemble (MINMAX+MARKOV)]
      BOSSbase (9,074 images), size 512 × 512, resized

  18. Summary
      The main contributions for future steganalysis
      – High dimensionality does not have to be a restriction on feature design
      – Proposed a scalable, fast, and simple classification methodology based on ensemble classifiers
      – One step further towards the automation of steganalysis
      – Showed that state-of-the-art steganalysis can be improved by a large margin
      Open problems
      – How to design prefeatures?
      – How to define the random projections?

  19. The power of random projections
      Shigeo Fukuda, Lunch With a Helmet On (1987)
