Week 2 - Monday C# MonoGame Program creates a Game1 (or similar) - PowerPoint PPT Presentation




SLIDE 1

Week 2 - Monday

SLIDE 2

• C#
• MonoGame

SLIDE 3
SLIDE 4
SLIDE 5
SLIDE 6
SLIDE 7

• Program creates a Game1 (or similar) object and starts it running
• Game1 has:
  • Initialize()
  • LoadContent()
  • Update()
  • Draw()
• It runs an update-draw loop continuously until told to exit
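The structure described above can be sketched as a minimal Game1 class; details vary by MonoGame template version, so treat this as an illustrative outline rather than the exact generated code:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Initialize() { base.Initialize(); }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    // Update() and Draw() are called in a continuous loop until Exit() is requested
    protected override void Update(GameTime gameTime) { base.Update(gameTime); }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }
}

// Program's Main creates the game object and starts the loop:
// using (var game = new Game1()) { game.Run(); }
```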

SLIDE 8

• We're used to interacting with programs from the command line (console)
• MonoGame was not designed with this in mind
  • It has pretty easy ways to read from the keyboard, the mouse, and Xbox controllers
• But you'll need a console for Project 1 so that you can tell it which file to load and what kind of manipulations to perform on it
• So that Console.Write() and Console.Read() work:
  • Go to the Properties page for your project
  • Go to the Application tab
  • Change Output Type to Console Application
• More information: http://rbwhitaker.wikidot.com/console-windows
• You'll need a separate thread to read and write to the console if you don't want your game to freeze up
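One way to keep console reads from blocking the game loop is a background thread that feeds a thread-safe queue, which Update() then polls. A rough sketch; the queue-based design here is one possible approach, not the slides' prescribed one:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Background thread reads console lines; the game loop polls the queue in Update().
ConcurrentQueue<string> commands = new ConcurrentQueue<string>();

Thread consoleThread = new Thread(() =>
{
    while (true)
    {
        string line = Console.ReadLine();   // blocks, but only on this thread
        if (line != null)
            commands.Enqueue(line);
    }
});
consoleThread.IsBackground = true;  // so it won't keep the process alive on exit
consoleThread.Start();

// Inside Update(), drain pending commands without blocking:
// while (commands.TryDequeue(out string cmd)) { /* handle cmd */ }
```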

SLIDE 9

• To draw a picture on the screen, we need to load it first
• Inside a MonoGame project, right-click the Content.mgcb file and choose Open with…
  • Select MonoGame Pipeline Tool
  • Add and then Existing Item…
  • Find an image you want on your hard drive
  • Make sure the Build Action is Build
  • The Importer should be Texture Importer - MonoGame
• Create a Texture2D member variable to hold it
  • Assume the member variable is called cat and the content is called cat.jpg
• In LoadContent(), add the line:

cat = Content.Load<Texture2D>("cat");

  • Note: the content pipeline strips the file extension, so the asset name is "cat", not "cat.jpg"

SLIDE 10

• Now the variable cat contains a loaded 2D texture
• Inside the Draw() method, add the following code:

spriteBatch.Begin();
spriteBatch.Draw(cat, new Vector2(x, y), Color.White);
spriteBatch.End();

• This will draw cat at location (x, y)
• All sprites need to be drawn between Begin() and End() spriteBatch calls

SLIDE 11

• Modern TrueType and OpenType fonts are vector descriptions of the shapes of characters
  • Vector descriptions are good for quality, but bad for speed
• MonoGame allows us to take a vector-based font and turn it into a picture of characters that can be rendered as a texture
  • Just like everything else
SLIDE 12

• Inside a MonoGame project, right-click the Content.mgcb file and choose Open with…
• Select MonoGame Pipeline Tool
• Right-click on Content in the tool, and select Add -> New Item…
• Choose SpriteFont Description and give your new SpriteFont a name
• Open the .spritefont file in a text editor like Notepad++
• By default, the font is Arial at size 12
  • Edit the XML to pick the font, size, and spacing
  • You will need multiple SpriteFonts even for different sizes of the same font
• Repeat the process to make more fonts
• Note: fonts have complex licensing and distribution requirements
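The generated .spritefont file is XML; a trimmed sketch of the elements the steps above mention (the values shown are the usual defaults, included here for illustration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
  <Asset Type="Graphics:FontDescription">
    <FontName>Arial</FontName>   <!-- font family to rasterize -->
    <Size>12</Size>              <!-- point size; one SpriteFont per size -->
    <Spacing>0</Spacing>         <!-- extra space between characters -->
    <Style>Regular</Style>
    <CharacterRegions>
      <CharacterRegion>
        <Start>&#32;</Start>     <!-- first character to include (space) -->
        <End>&#126;</End>        <!-- last character to include (~) -->
      </CharacterRegion>
    </CharacterRegions>
  </Asset>
</XnaContent>
```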

SLIDE 13

• Load the font similar to texture content; in LoadContent():

font = Content.Load<SpriteFont>("Text");

• Add a DrawString() call in the Draw() method:

spriteBatch.Begin();
spriteBatch.DrawString(font, "Hello, World!", new Vector2(100, 100), Color.Black);
spriteBatch.End();

SLIDE 14

• They "float" above the background like fairies…
• Multiple sprites are often stored on one texture
  • It's cheaper to store one big image than a lot of small ones
• This is an idea borrowed from old video games that rendered characters as sprites

SLIDE 15

• It is possible to apply all kinds of 3D transformations to a sprite
  • A sprite can be used for billboarding or other image-based techniques in a fully 3D environment
• But we can also simply rotate them using an overloaded call to Draw():

spriteBatch.Draw(texture, location, sourceRectangle, Color.White, angle,
    origin, 1.0f, SpriteEffects.None, 1);
SLIDE 16

• texture: Texture2D to draw
• location: Location to draw it
• sourceRectangle: Portion of image
• Color.White: Full brightness
• angle: Angle in radians
• origin: Origin of rotation
• 1.0f: Scaling
• SpriteEffects.None: No effects
• 1: Float level
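Putting the parameters together, a sketch of a rotated draw; the texture variable cat, the rotation variable angle, and the screen position are assumed values, not from the slides:

```csharp
// Rotate a sprite around its center. 'cat' is a Texture2D loaded earlier;
// 'angle' is in radians and might be advanced each frame in Update().
Vector2 center = new Vector2(cat.Width / 2f, cat.Height / 2f);

spriteBatch.Begin();
spriteBatch.Draw(
    cat,                      // texture to draw
    new Vector2(200, 200),    // location on screen
    null,                     // null source rectangle = use the whole image
    Color.White,              // full brightness, no tint
    angle,                    // rotation angle in radians
    center,                   // origin of rotation (center of the image)
    1.0f,                     // scale
    SpriteEffects.None,       // no flipping effects
    0f);                      // layer depth
spriteBatch.End();
```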

SLIDE 17

• For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline
• This pipeline contains three conceptual stages:
  • Application: Produces material to be rendered
  • Geometry: Decides what, how, and where to render
  • Rasterizer: Renders the final image

SLIDE 18
SLIDE 19
SLIDE 20

• The output of the Application Stage is polygons
• The Geometry Stage processes these polygons using the following pipeline:

Model and View Transform → Vertex Shading → Projection → Clipping → Screen Mapping

SLIDE 21

• Each 3D model has its own coordinate system called model space
• When combining all the models in a scene together, the models must be converted from model space to world space
• After that, we still have to account for the position of the camera

SLIDE 22

• We transform the models into camera space or eye space with a view transform
• Then, the camera will sit at (0,0,0), looking into negative z
• The z-axis comes out of the screen in the book's examples and in MonoGame (but not in older DirectX)
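In MonoGame, the view transform can be built with Matrix.CreateLookAt, which produces exactly this camera-at-the-origin, looking-down-negative-z arrangement. A small sketch; the camera position, target, and model transform are made-up example values:

```csharp
using Microsoft.Xna.Framework;

// View transform: after applying it, the camera sits at the origin
// looking down negative z (right-handed, matching the book and MonoGame).
Vector3 cameraPosition = new Vector3(0, 5, 10);  // assumed camera location
Vector3 cameraTarget   = Vector3.Zero;           // look at the world origin
Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);

// World transform: model space -> world space for one model (example values)
Matrix world = Matrix.CreateRotationY(MathHelper.PiOver4)
             * Matrix.CreateTranslation(2, 0, 0);
```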

SLIDE 23

• Figuring out the effect of light on a material is called shading
• This involves computing a (sometimes complex) shading equation at different points on an object
• Typically, information is computed on a per-vertex basis and may include:
  • Location
  • Normals
  • Colors
SLIDE 24

• Projection transforms the view volume into a standardized unit cube
• Vertices then have a 2D location and a z-value
• There are two common forms of projection:
  • Orthographic: Parallel lines stay parallel, objects do not get smaller in the distance
  • Perspective: The farther away an object is, the smaller it appears
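MonoGame's Matrix class can build both kinds of projection. A sketch with assumed values for the field of view and view-volume extents:

```csharp
using Microsoft.Xna.Framework;

// Perspective: distant objects shrink. 45-degree vertical field of view,
// 16:9 aspect ratio, near and far planes at 0.1 and 100 (all assumed values).
Matrix perspective = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, 16f / 9f, 0.1f, 100f);

// Orthographic: parallel lines stay parallel, no foreshortening.
// The view volume is a box 20 units wide by 11.25 tall (assumed values).
Matrix orthographic = Matrix.CreateOrthographic(20f, 11.25f, 0.1f, 100f);
```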
SLIDE 25

• Clipping processes the polygons based on their location relative to the view volume
• A polygon completely inside the view volume is unchanged
• A polygon completely outside the view volume is ignored (not rendered)
• A polygon partially inside is clipped
  • New vertices on the boundary of the volume are created
• Since everything has been transformed into a unit cube, dedicated hardware can do the clipping in exactly the same way, every time

SLIDE 26

• Screen mapping transforms the x and y coordinates of each polygon from the unit cube to screen coordinates
• A few oddities:
  • DirectX has weird coordinate systems for pixels where the location is the center of the pixel
  • DirectX conforms to the Windows standard of pixel (0,0) being in the upper left of the screen
  • OpenGL conforms to the Cartesian system with pixel (0,0) in the lower left of the screen
SLIDE 27
SLIDE 28

• Rendering pipeline
  • Rasterizer stage
SLIDE 29

• Keep reading Chapter 2
• Want a Williams-Sonoma internship?
  • Visit http://wsisupplychain.weebly.com/
• Interested in coaching 7-18 year old kids in programming?
  • Consider working at theCoderSchool
  • For more information:
    ▪ Visit https://www.thecoderschool.com/locations/westerville/
    ▪ Contact Kevin Choo at kevin@thecoderschool.com
    ▪ Ask me!