Improved Bounds on the Dot Product under Random Projection and Random Sign Projection
Ata Kabán, School of Computer Science, The University of Birmingham, Birmingham B15 2TT, UK. http://www.cs.bham.ac.uk/~axk
KDD 2015, Sydney, 10-13 August 2015.
[Main bound (equation fragmentary in the extraction): a margin-based generalization bound stated for an i.i.d. projected sample R = {(Rx_n, y_n)}_{n=1}^N, involving the empirical margin count min_{h∈H} (1/N) Σ_{n=1}^N 1(h(x_n) y_n ≥ ρ), the margin parameter ρ, the projected dimension k, and the input norms ‖x_n‖.]
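The object of the bounds above, preservation of the dot product under random projection and random sign projection, can be checked numerically. A minimal sketch (the dimensions, seed, and trial count are arbitrary illustrations, not from the paper); both projections are unbiased, E[(Rx)·(Ry)] = x·y:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, trials = 1000, 200, 200  # illustrative sizes, not from the paper

x = rng.normal(size=d)
y = rng.normal(size=d)
true_dot = x @ y

def avg_projected_dot(draw_R):
    """Average (Rx)·(Ry) over independently drawn projection matrices."""
    return np.mean([(R @ x) @ (R @ y) for R in (draw_R() for _ in range(trials))])

# Gaussian random projection: i.i.d. N(0, 1/k) entries.
gauss = avg_projected_dot(lambda: rng.normal(scale=1 / np.sqrt(k), size=(k, d)))

# Random sign projection: i.i.d. +-1 entries scaled by 1/sqrt(k)
# (the database-friendly construction in the Achlioptas sense).
sign = avg_projected_dot(lambda: rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k))
```

Averaged over many draws, both projected dot products concentrate around x·y, with fluctuations that shrink as k grows.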
Illustration of the predictive behaviour of the bound (δ = 0.1 and ρ = 0.05) on the Advert classification data set from the UCI repository (d = 1554 features, N = 3279 points). The empirical error was estimated on holdout sets using an SVM with default settings over 30 random splits of the data (2/3 training, 1/3 testing). The data were first standardised, then scaled so that max_{n∈{1,…,N}} ‖x_n‖ = 1.
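The experimental protocol in the caption can be sketched as follows. This is a stand-in, not the paper's code: it uses synthetic two-class data in place of the Advert set, a least-squares linear classifier in place of the default-settings SVM, and small illustrative sizes; only the preprocessing and the 30-way 2/3–1/3 split protocol follow the caption.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, k = 50, 300, 10  # illustrative sizes; the experiment uses d = 1554, N = 3279

# Synthetic two-class data standing in for the Advert set.
y = rng.choice([-1.0, 1.0], size=N)
X = rng.normal(size=(N, d)) + y[:, None]  # class-dependent mean shift

# Preprocessing as in the caption: standardise, then scale so that
# max_n ||x_n|| = 1.
X = (X - X.mean(axis=0)) / X.std(axis=0)
X /= np.linalg.norm(X, axis=1).max()

errors = []
for _ in range(30):  # 30 random splits, as in the caption
    # Random sign projection to k dimensions.
    R = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)
    Z = X @ R.T
    # Random 2/3 training, 1/3 testing split.
    perm = rng.permutation(N)
    train, test = perm[: 2 * N // 3], perm[2 * N // 3 :]
    # Least-squares linear classifier as a stand-in for the SVM.
    w, *_ = np.linalg.lstsq(Z[train], y[train], rcond=None)
    errors.append(np.mean(np.sign(Z[test] @ w) != y[test]))

holdout_error = float(np.mean(errors))
```

Averaging the holdout error over the 30 splits gives the empirical curve that the bound is compared against in the figure.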