

  1. EI331 Signals and Systems, Lecture 4
     Bo Jiang
     John Hopcroft Center for Computer Science, Shanghai Jiao Tong University
     March 7, 2019

  2. Contents
     1. CT Unit Impulse Function
     2. Systems
     3. Basic System Properties
        3.1 Memory
        3.2 Invertibility
        3.3 Causality
        3.4 Stability
        3.5 Time Invariance
        3.6 Linearity

  3. CT Unit Impulse Function
     Also called the Dirac delta function or δ function:
       δ(t) = lim_{Δ→0} r_Δ(t),  where  r_Δ(t) = [u(t + Δ/2) − u(t − Δ/2)] / Δ
     Idealization of quantities with very large magnitude but very small duration (e.g. an impulse force) or spatial span (e.g. a point mass/charge).
     (Photo: Paul Dirac, from Wikipedia.)
     By usual calculus,
       lim_{Δ→0} r_Δ(t) = 0 for t ≠ 0,  and +∞ for t = 0,
     so the limit is not properly defined at t = 0.
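
A minimal numerical sketch (not from the slides; it assumes NumPy) of how the pulse r_Δ keeps unit area while its height 1/Δ blows up as Δ shrinks:

    import numpy as np

    def r(t, delta):
        # rectangular pulse of width delta and height 1/delta, centered at 0
        return np.where(np.abs(t) <= delta / 2, 1.0 / delta, 0.0)

    t = np.linspace(-1.0, 1.0, 200001)
    dt = t[1] - t[0]
    for delta in (0.5, 0.1, 0.01):
        area = r(t, delta).sum() * dt
        print(f"delta={delta}: height={1/delta:.0f}, area~{area:.3f}")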

  4. Analogy with Construction of Real Numbers
     Real numbers
     • defined by (equivalence classes of) Cauchy sequences in ℚ
     • arithmetic: for x = {x_n} ⊂ ℚ and y = {y_n} ⊂ ℚ,
       x + y ≜ {x_n + y_n},  xy ≜ {x_n y_n}
     Unit impulse
     • not an ordinary function
     • a singularity (generalized) function
     • defined by a "convergent" sequence of short pulses

  5. Interpretation of Limit
     Idea: define δ in terms of integration.
     For any φ(t) continuous at t = 0,
       ∫_ℝ δ(t)φ(t) dt ≜ lim_{Δ→0} ∫_ℝ r_Δ(t)φ(t) dt
     By continuity of φ,
       ∫_ℝ r_Δ(t)φ(t) dt = (1/Δ) ∫_{−Δ/2}^{Δ/2} φ(t) dt → φ(0)
     Sampling property:
       ∫_{−∞}^{∞} δ(t)φ(t) dt = φ(0)
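
A numerical check of this limit (an added sketch; the test function φ(t) = cos t and the grid are choices made here, not from the slides):

    import numpy as np

    phi = np.cos
    t = np.linspace(-1.0, 1.0, 200001)
    dt = t[1] - t[0]
    for delta in (0.5, 0.1, 0.01):
        r = np.where(np.abs(t) <= delta / 2, 1.0 / delta, 0.0)  # r_Delta(t)
        # integral of r_Delta(t) * phi(t) approaches phi(0) = 1 as delta -> 0
        print(delta, round(float(np.sum(r * phi(t)) * dt), 6))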

  6. Other Approximations
     Can define δ as the limit of other functions, e.g.
       g_Δ(t) = (1/(√(2π) Δ)) e^{−t²/(2Δ²)},   D_Δ(t) = sin(πt/Δ)/(πt)
     (Figure: three approximating families plotted against t, two labeled "Good" and one labeled "Bad".)
     A family {K_Δ(t)}_{Δ>0} is called good kernels or an approximation to the identity if
     1. For all Δ > 0, ∫_{−∞}^{∞} K_Δ(t) dt = 1
     2. For some M > 0 and all Δ > 0, ∫_{−∞}^{∞} |K_Δ(t)| dt < M
     3. For every ε > 0, lim_{Δ→0} ∫_{|t|>ε} |K_Δ(t)| dt = 0
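
A similar sketch for the Gaussian family (added here; NumPy and φ(t) = cos t are again assumptions), which also makes the three good-kernel conditions easy to eyeball numerically:

    import numpy as np

    def g(t, delta):
        # Gaussian kernel with standard deviation delta
        return np.exp(-t**2 / (2 * delta**2)) / (np.sqrt(2 * np.pi) * delta)

    phi = np.cos
    t = np.linspace(-20.0, 20.0, 400001)
    dt = t[1] - t[0]
    for delta in (1.0, 0.3, 0.05):
        k = g(t, delta)
        total = np.sum(k) * dt                          # condition 1: ~ 1
        abs_total = np.sum(np.abs(k)) * dt              # condition 2: stays bounded
        tail = np.sum(np.abs(k)[np.abs(t) > 0.5]) * dt  # condition 3: -> 0
        sample = np.sum(k * phi(t)) * dt                # sampling: -> phi(0) = 1
        print(delta, round(total, 3), round(abs_total, 3), round(tail, 3), round(sample, 3))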

  7. Properties of Unit Impulse Function
     Unit "area":
       ∫_ℝ δ(τ) dτ = 1
     Proof. Apply the sampling property to φ(t) = 1.
     Relation to u(t):
       u(t) = ∫_{−∞}^{t} δ(τ) dτ = ∫_ℝ δ(τ) u(t − τ) dτ,   δ(t) = (d/dt) u(t)
     Proof. For the integral, apply the sampling property; note u(t − τ) is continuous at τ = 0 for t ≠ 0. For the derivative, u′(t) = lim_{Δ→0} r_Δ(t) (will come back later).
     In general,
       ∫_a^b f(τ) dτ ≜ ∫_ℝ f(τ)[u(τ − a) − u(τ − b)] dτ

  8. Transformations of Unit Impulse
     The usual rules for change of variables hold.
     Time scaling:
       ∫_ℝ δ(at)φ(t) dt ≜ ∫_ℝ δ(t) φ(t/a) dt/|a|  ⇒  δ(at) = (1/|a|) δ(t)
     Time reversal:
       ∫_ℝ δ(−t)φ(t) dt ≜ ∫_ℝ δ(t)φ(−t) dt  ⇒  δ(−t) = δ(t)
     Time shift (general sampling property):
       ∫_ℝ δ(t − a)φ(t) dt ≜ ∫_ℝ δ(t)φ(t + a) dt = φ(a)
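
A rough numerical check of the time-scaling rule (an added sketch; a = −3 and φ(t) = cos t are arbitrary choices):

    import numpy as np

    a = -3.0
    phi = np.cos
    t = np.linspace(-1.0, 1.0, 400001)
    dt = t[1] - t[0]
    for delta in (0.1, 0.01, 0.001):
        pulse = np.where(np.abs(a * t) <= delta / 2, 1.0 / delta, 0.0)  # r_Delta(a t)
        # integral of r_Delta(a t) * phi(t) approaches phi(0)/|a| = 1/3
        print(delta, round(float(np.sum(pulse * phi(t)) * dt), 4))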

  9. Multiplication and Sampling Property
     Multiplication by an ordinary function:
       ∫_ℝ [x(t)δ(t)]φ(t) dt ≜ ∫_ℝ δ(t)[x(t)φ(t)] dt = x(0)φ(0)
     Sampling property:
       xδ = x(0)δ, or x(t)δ(t) = x(0)δ(t)
       x τ_a δ = x(a) τ_a δ, or x(t)δ(t − a) = x(a)δ(t − a)
     Just a restatement of the following:
       ∫_ℝ [x(t)δ(t − a)]φ(t) dt = x(a)φ(a) = ∫_ℝ [x(a)δ(t − a)]φ(t) dt
     Statements about δ are always interpreted this way!

  10. Derivative of u(at + b)
      The chain rule holds:
        (d/dt) u(at + b) = a δ(at + b)
      (Figure: plots of u(t), u(t + b), δ(t), δ(t + b).)
      "Proof".
      1. (d/dt) u(t + b) = δ(t + b)
      2. a > 0 ⇒ u(at + b) = u(t + b/a), so
         (d/dt) u(at + b) = (d/dt) u(t + b/a) = δ(t + b/a) = a δ(at + b)
      3. a < 0 ⇒ u(at + b) = 1 − u(t + b/a), so
         (d/dt) u(at + b) = −(d/dt) u(t + b/a) = −δ(t + b/a) = −|a| δ(at + b) = a δ(at + b)

  11. Derivative of x(t)u(t)
      The Leibniz (product) rule holds. For differentiable x,
        [x(t)u(t)]′ = x′(t)u(t) + x(t)u′(t) = x′(t)u(t) + x(t)δ(t) = x′(t)u(t) + x(0)δ(t)
      where x′(t)u(t) is the ordinary derivative and x(0)δ(t) is the derivative at the discontinuity.
      Will see later a general procedure for taking derivatives.
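
As a quick worked illustration of this rule, take x(t) = e^{−t}:
  [e^{−t} u(t)]′ = −e^{−t} u(t) + e^{−t} δ(t) = −e^{−t} u(t) + δ(t),
since e^{−t} δ(t) = e^{0} δ(t) = δ(t) by the sampling property.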

  12. Functions with Jump Discontinuities
      Example.
        x(t) = (1 − e^{−t/3})[u(t) − u(t − 1)] + u(t − 1)
             = 0 for t < 0;  1 − e^{−t/3} for 0 < t < 1;  1 for t > 1
        x′(t) = (1/3) e^{−t/3} [u(t) − u(t − 1)] + e^{−1/3} δ(t − 1)
      (Figure: plots of x(t), which rises toward 1 − e^{−1/3} on (0, 1) and jumps to 1 at t = 1, and of x′(t), which has an impulse of size e^{−1/3} at t = 1.)
      1. impulse at each discontinuity
      2. impulse size equal to jump size

  13. Contents
      1. CT Unit Impulse Function
      2. Systems
      3. Basic System Properties
         3.1 Memory
         3.2 Invertibility
         3.3 Causality
         3.4 Stability
         3.5 Time Invariance
         3.6 Linearity

  14. Systems
      A system takes some input and produces some output. Mathematically, y = T(x) for some operator T.
      (Diagrams: x(t) → CT system → y(t);  x[n] → DT system → y[n].)
      Example. Balance of a bank account.
      • Input x[n]: net deposit on the n-th day
      • Output y[n]: balance at the end of the n-th day
      • Input-output relation: y[n] = (1 + r) y[n − 1] + x[n], where r is the interest rate
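
A minimal simulation sketch of this account system (not from the slides; the deposits and the rate below are made-up values):

    # y[n] = (1 + r) * y[n-1] + x[n], assuming the balance starts at y[-1] = 0
    def bank_balance(deposits, r=0.0001):
        y, prev = [], 0.0
        for x_n in deposits:
            prev = (1 + r) * prev + x_n
            y.append(prev)
        return y

    print(bank_balance([100.0, 0.0, -50.0, 20.0]))  # balance at the end of each day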

  15. Interconnections of Systems
      • Complex systems are built from interconnected subsystems
      • The scope of a subsystem depends on the level of abstraction
      Basic types of interconnections:
      • series (cascade): Input → System 1 → System 2 → Output
      • parallel: Input feeds both System 1 and System 2, and their outputs are summed to form the Output
      • feedback: Input and the fed-back signal are summed and drive System 1, whose Output is fed back through System 2 to the summing junction

  16. Example
      (Block diagram: x[n] → W → (+) → σ → y[n], with the bias b added at the summing junction and the output fed back through the unit delay τ_1 and the gain U.)
      Subsystems
      • W: y[n] = W x[n]
      • b: y[n] = x[n] + b
      • σ: y[n] = σ(x[n])
      • U: y[n] = U x[n]
      • τ_1: y[n] = x[n − 1]
      Composite system (recurrent neural network):
        y[n] = σ(W x[n] + U y[n − 1] + b)
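
A minimal NumPy sketch of one step of this composite system (added here; the tanh nonlinearity, the dimensions, and the random weights are assumptions, not part of the slide):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 2))   # input weights
    U = rng.normal(size=(3, 3))   # recurrent weights
    b = np.zeros(3)               # bias

    def rnn_step(x_n, y_prev):
        # y[n] = sigma(W x[n] + U y[n-1] + b), with sigma = tanh
        return np.tanh(W @ x_n + U @ y_prev + b)

    y = np.zeros(3)
    for x_n in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
        y = rnn_step(x_n, y)
    print(y)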

  17. Contents
      1. CT Unit Impulse Function
      2. Systems
      3. Basic System Properties
         3.1 Memory
         3.2 Invertibility
         3.3 Causality
         3.4 Stability
         3.5 Time Invariance
         3.6 Linearity

  18. Memory
      A system is memoryless if its output depends only on the input at the same time.
      Example. Identity system y = I(x) = x:
        y(t) = x(t),  y[n] = x[n]
      Example. Multiplication by a known function, y = ax:
        y(t) = a(t) x(t),  y[n] = a[n] x[n]
      • resistor: v(t) = R i(t)
      • y(t) = sin(t + 1) x(t) memoryless? Yes! a(t) = sin(t + 1) is not part of the input!
      Example. Can take a complicated form:
        y(t) = x³(t) − 2 x(t) + e^{x(t)} + sin(cos(x(t)) + cos(t + 1))

  19. Memory
      A system has memory (is non-memoryless) if it is not memoryless.
      Example. Time shift y = τ_a x for a ≠ 0:
        y(t) = x(t − a),  y[n] = x[n − a]
      • a > 0: output depends on past input
      • a < 0: output depends on future input ("memory"!)
      Example. Integrator and accumulator:
        y(t) = ∫_{−∞}^{t} x(τ) dτ,   y[n] = Σ_{k=−∞}^{n} x[k]
      • capacitor (used in DRAM!): v(t) = ∫_{−∞}^{t} C^{−1} i(τ) dτ
      Example. Differentiator:
        y(t) = (d/dt) x(t) = lim_{a→0} [x(t + a) − x(t)] / a
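
A small added sketch of the accumulator as a system with memory: its output depends on a running state, not just the current input sample.

    def accumulator(xs):
        # y[n] = sum of x[k] for k <= n (inputs assumed to start at n = 0)
        y, state = [], 0.0
        for x_n in xs:
            state += x_n          # the state remembers all past inputs
            y.append(state)
        return y

    print(accumulator([1.0, 0.0, 0.0, 2.0]))  # [1.0, 1.0, 1.0, 3.0]
    # A memoryless system such as y[n] = x[n] ** 2 would need no such state.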

  20. Invertibility
      A system is invertible if distinct inputs yield distinct outputs. Mathematically, the system operator T is injective, i.e.
        ∀ x_1, x_2: x_1 ≠ x_2 ⇒ T(x_1) ≠ T(x_2)
      A system is non-invertible if it is not invertible, i.e.
        ∃ x_1, x_2: x_1 ≠ x_2 but T(x_1) = T(x_2)
      Example. Multiplication by a known function, y = ax:
      • invertible if a(t) ≠ 0 for all t, e.g. y(t) = e^t x(t)
      • non-invertible if a(t) = 0 for some t, e.g. y(t) = u(t) x(t)
      Example. y(t) = x²(t) is non-invertible, since x² = (−x)².

  21. Invertibility
      System T_1 is an inverse system of system T if the cascade of T and T_1 forms the identity system, i.e. T_1 ∘ T = I.
      (Diagram: x → T → y → T_1 → x is equivalent to x → I → x.)
      A system is invertible iff it has an inverse system.
      Example. y(t) = 2 x(t) has inverse system y(t) = (1/2) x(t).
      Example. The inverse system of the accumulator y[n] = Σ_{k=−∞}^{n} x[k] is the first difference y[n] = x[n] − x[n − 1].
      Caution. Not symmetric: the first difference itself is non-invertible, since x and x + c have the same first difference for any constant c.
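
An added sketch of this asymmetry, assuming signals that start at n = 0 with zero state before that:

    import numpy as np

    x = np.array([1.0, -2.0, 3.0, 0.5])

    # First difference undoes the accumulator ...
    acc = np.cumsum(x)                        # accumulator y[n] = sum_{k<=n} x[k]
    recovered = np.diff(acc, prepend=0.0)     # y[n] - y[n-1], with y[-1] = 0
    print(np.allclose(recovered, x))          # True: input recovered

    # ... but the first difference itself is non-invertible:
    print(np.array_equal(np.diff(x), np.diff(x + 5.0)))  # True: distinct inputs, same output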

  22. Causality
      A system is causal if the output at any time depends only on input values up to that time.
      Also called nonanticipative, i.e. the output at any time does not depend on (anticipate) future input values.
      Example. First difference:
      • backward difference is causal: y[n] = x[n] − x[n − 1]
      • forward difference is noncausal: y[n] = x[n + 1] − x[n]
      Example. The (centered) moving average is noncausal:
        y[n] = (1/(2M + 1)) Σ_{k=−M}^{M} x[n − k],   M ≥ 1
      Example. y(t) = sin(t + 1) x(t) is causal.
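
An added sketch comparing the centered and one-sided averages for M = 1 on a short pulse (samples falling outside the signal are simply dropped from the sums, a choice made here):

    def centered_ma(x, M):
        # noncausal: y[n] averages x[n-M], ..., x[n+M]
        N = len(x)
        return [sum(x[n - k] for k in range(-M, M + 1) if 0 <= n - k < N) / (2 * M + 1)
                for n in range(N)]

    def causal_ma(x, M):
        # causal: y[n] averages only x[n-M], ..., x[n]
        N = len(x)
        return [sum(x[n - k] for k in range(M + 1) if 0 <= n - k < N) / (M + 1)
                for n in range(N)]

    x = [0.0, 1.0, 0.0, 0.0]
    print(centered_ma(x, 1))  # output is nonzero one step *before* the spike at n = 1
    print(causal_ma(x, 1))    # output starts only at the spike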

  23. Causality
      • For causal systems, identical inputs up to some time yield identical outputs up to the same time:
        x_1(t) = x_2(t) for t ≤ t_0 ⇒ (Tx_1)(t) = (Tx_2)(t) for t ≤ t_0
        x_1[n] = x_2[n] for n ≤ n_0 ⇒ (Tx_1)[n] = (Tx_2)[n] for n ≤ n_0
      • Causality is important when t (or n) is time
        ◮ real-time physical systems are causal: cause before effect
        ◮ non-real-time systems can be noncausal, e.g. postprocessing of recorded signals:
          y[n] = (1/(2M + 1)) Σ_{k=−M}^{M} x[n − k]  (noncausal)  vs.  y[n] = (1/(M + 1)) Σ_{k=0}^{M} x[n − k]  (causal)
        ◮ not meaningful if t (or n) is a spatial variable
