{"id":456,"date":"2020-12-31T17:18:06","date_gmt":"2020-12-31T22:18:06","guid":{"rendered":"http:\/\/www.matthewmosher.com\/studentwork\/?p=456"},"modified":"2020-12-09T17:22:30","modified_gmt":"2020-12-09T22:22:30","slug":"creative-code-motion-detection-in-max","status":"publish","type":"post","link":"https:\/\/mosher.art\/studentwork\/university-of-central-florida\/creative-code-motion-detection-in-max\/","title":{"rendered":"Creative Code: Motion Detection in Max"},"content":{"rendered":"<div class=\"description user_content teacher-version enhanced\">\n<p>GOAL: To experiment with non-traditional interface input mechanisms, such as motion tracking.<\/p>\n<p>OBJECTIVE: Build a Max program that plays audio and\/or video based on user motion.\u00a0 Use presentation mode to design a functional and aesthetically pleasing user interface for your application.<\/p>\n<p>PROCESS:<\/p>\n<p>In brief, we will be looking at how to detect motion within a camera&#8217;s field of view.\u00a0 So the first thing you need to ask yourself is what kind of app do you want to build?\u00a0 Things like an &#8216;air&#8217; drum kit, a swipe based video switcher, a burglar alarm, an art installation, etc are all within the realm of possibility.\u00a0 Brainstorm and sketch some ideas about how you could use user motion to drive and audio\/visual system and why you would want to do so.\u00a0 You should make at least one role based prototype for your idea as well (storyboard or scenario).<\/p>\n<p>To get rolling with the code we&#8217;ll need to get camera input and detect motion. 
Of course, there&#8217;s <a class=\"instructure_file_link\" title=\"SuperPatch2020-3a.maxpat\" href=\"https:\/\/webcourses.ucf.edu\/courses\/1361224\/files\/82450386\/download?wrap=1\" data-api-endpoint=\"https:\/\/webcourses.ucf.edu\/api\/v1\/courses\/1361224\/files\/82450386\" data-api-returntype=\"File\">an updated super patch<\/a>.<\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/m7mYKcoqKag\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/p>\n<p>While motion detection will let us know if something (anything) in the camera&#8217;s view has changed, it works only moment to moment.\u00a0 Standing still can trick it into thinking there is nothing there.\u00a0 Presence detection collects a running average of the environment to use as a subtraction frame.\u00a0 This will let us know if something new (a person, cat, package, etc.) is in the environment, even if it is still.\u00a0 For our presence detection we&#8217;ll also need to install a package or library.<\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/fKzrAtbfIvo\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/p>\n<p>Both those techniques work on the camera&#8217;s entire field of view, but what if we want to monitor a specific area, like a doorway, or multiple areas? We can use the scissors object to cut up the camera frame and run our tracking on each section of it:<\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/-BM-SZTbsQc\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/p>\n<p>You already know how to play video and audio files; for a refresher, see the previous modules. 
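The presence-detection idea described above (a slowly updated running average used as a subtraction frame) can be sketched the same way. This is a hedged illustration in Python with NumPy; the class name, adaptation rate, and threshold are assumptions for the sketch, and the running average plays the same smoothing role [slide] does in the super patch:

```python
import numpy as np

class PresenceDetector:
    """Background subtraction with a running average.

    The background adapts slowly, so something new in the scene is
    still reported even if it holds perfectly still, unlike plain
    frame-to-frame differencing."""

    def __init__(self, shape, rate=0.05, threshold=15.0):
        self.background = np.zeros(shape, dtype=float)
        self.rate = rate            # how quickly the average adapts
        self.threshold = threshold  # user-adjustable sensitivity

    def present(self, frame):
        frame = frame.astype(float)
        diff = np.abs(frame - self.background).mean()
        # fold the new frame into the running average
        self.background += self.rate * (frame - self.background)
        return diff > self.threshold

det = PresenceDetector((4, 4))
empty = np.zeros((4, 4), dtype=np.uint8)
person = np.full((4, 4), 100, dtype=np.uint8)
print(det.present(empty))   # False: scene matches the background
print(det.present(person))  # True, even for a motionless newcomer
```

The rate parameter is the trade-off to tune: too fast and a still visitor gets absorbed into the background; too slow and lighting changes read as presence.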
But what about images?\u00a0 Or UI buttons?<\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/ISA6yibtb4Y\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/p>\n<p>One thing that may be helpful, though, is some data smoothing.\u00a0 You&#8217;ll notice that the motion data jumps around quite a bit due to signal noise in the system. We can smooth some of this out with filtering.\u00a0 Finally, let&#8217;s make our code look good with presentation mode.<\/p>\n<p><iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/bCUmtXyyC08\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><\/iframe><\/p>\n<p>Requirements:<\/p>\n<ul>\n<li>Brainstorm and make a role prototype for your motion\/presence detection project.<\/li>\n<li>Use [jit.qt.grab] or the Vizzie Grabbr for input.<\/li>\n<li>Use [jit.scissors] to break your video grab into at least a 2 x 2 grid.<\/li>\n<li>Use [jit.movie], [sfplay~], and\/or [jit.playlist] for output.\u00a0 All media files should auto-load using filename attributes or loadbangs. 
All files combined must total less than 250MB.<\/li>\n<li>Use motion\/presence detection via frame differencing to control parameters of the output using [jit.op @absdiff].<\/li>\n<li>Implement a user-adjustable threshold on the motion\/presence data with [jit.op @op &gt;].<\/li>\n<li>Use encapsulation to keep your code organized.<\/li>\n<li>Use comments to explain the functionality of the code.\u00a0 All user input elements should be labeled.<\/li>\n<li>Use comments to explain the concept behind your piece.\u00a0 This should be a full paragraph.\u00a0 Include a question about your work that you&#8217;d like the peer reviewers to address.<\/li>\n<li>Use presentation mode to show the user a clean version of your UI with just the elements they need to interact with it.\u00a0 Anything that is not user facing, such as all your encapsulated code, should not be in presentation mode.\u00a0 You can make your patch open in presentation mode by going to the inspector &gt; click the &#8216;P&#8217; icon at the top center of the inspector pane &gt; select the Basic tab &gt; check the Open in Presentation box.<\/li>\n<li>Be interesting.<\/li>\n<li>Create a 1-2 minute video screen capture demonstration of your code with audio narration explaining how it works and who it is for.<\/li>\n<\/ul>\n<p>Optional Considerations<\/p>\n<ul>\n<li>Use [send] &amp; [receive] to help clean up your patch (see the super patch)<\/li>\n<li>Control the flow of data using gates and switches (see the super patch)<\/li>\n<li>Try implementing data smoothing with [slide] (see the super patch)<\/li>\n<li>Consider camera placement.\u00a0 What would it mean for a camera to be ceiling-mounted and thus tracking movement across the floor of a room or stage? 
A camera mounted over a desk tracking a user&#8217;s hands?\u00a0 A face-on, seated user does not have to be the default.<\/li>\n<\/ul>\n<p>Save your Max patch and all the video and\/or audio clips in the same folder.\u00a0 Add a scan\/photo\/copy of your role prototype to the folder as &#8216;role.pdf&#8217;.\u00a0 Place your video explanation in this folder as &#8216;demo.mp4&#8217;. Name this folder Motion_YourName.\u00a0 Submit a zip archive of this folder here.\u00a0 (You can create a zip archive on Mac by right clicking the folder and selecting &#8216;Compress&#8217;.)<\/p>\n<p>As your Max license may be expiring soon, you might want to create an executable of your projects so that you can run them in the future.\u00a0 This essentially turns a patch into a standalone app.\u00a0 There is a great video series on this (parts <a class=\"external\" href=\"https:\/\/cycling74.com\/tutorials\/advanced-max-standalones-part-1\" target=\"_blank\">1<\/a>, <a class=\"external\" href=\"https:\/\/cycling74.com\/tutorials\/advanced-max-standalones-part-2\" target=\"_blank\">2<\/a>, 3, and <a class=\"external\" href=\"https:\/\/cycling74.com\/tutorials\/advanced-max-standalones-part-4\" target=\"_blank\">4<\/a>) that gets into the details of optimization, creating icons, reducing file size, etc.\u00a0 But really you get everything you need in part 1.\u00a0 This is entirely optional.<\/p>\n<p>STUDENT WORK<\/p>\n<p><iframe loading=\"lazy\" title=\"Creative Code Motion Detection\" width=\"620\" height=\"349\" src=\"https:\/\/www.youtube.com\/embed\/Ry2TARvduFg?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>GOAL: To experiment with non-traditional interface input mechanisms, such as motion tracking. 
OBJECTIVE: Build a Max program that plays audio and\/or video based on user motion.\u00a0 Use presentation&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20,6],"tags":[40,65,91,33],"class_list":["post-456","post","type-post","status-publish","format-standard","hentry","category-dig2500","category-university-of-central-florida","tag-4d","tag-lower-level","tag-max","tag-undergraduate"],"_links":{"self":[{"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/posts\/456","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/comments?post=456"}],"version-history":[{"count":1,"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/posts\/456\/revisions"}],"predecessor-version":[{"id":457,"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/posts\/456\/revisions\/457"}],"wp:attachment":[{"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/media?parent=456"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/categories?post=456"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mosher.art\/studentwork\/wp-json\/wp\/v2\/tags?post=456"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}