Method Precipitating Dread [PPT-Dread]

In this short documentation, I will give an overview of the various parts of this project. Generally speaking, there are three significant components: the archive itself, the language analysis methods applied to the tweets and, most significantly, the linear gradient lattices, the creation of which is the primary purpose of this undertaking. While some external libraries and modules were used, everything that can be seen here is the product of my own code. Both the images and the text, as well as the HTML of this website, were generated on a Linux system using Perl and bash scripts of my own creation.

PPT-Dread is a conceptual info-architectural structure. For more background regarding the ideals behind my work, please refer to: The Tools and Labour of the Info Architect.

Through this info architecture, every tweet that President Donald Trump published during his tenure is subjected to a basic analysis process consisting of both sentiment and content analysis. While I make use of pre-existing sentiment analysis tools, I have created a content analysis module specifically for this project. Both processes in conjunction are used to create an octagonal shape, which in turn determines the distribution, density and source material used to generate the linear gradient lattices that were also developed specifically for this project.

#Sentiment Analysis

The sentiment analysis matches all words used in the tweet against lists of words associated with the following emotions:

- anger
- anticipation
- disgust
- fear
- joy
- sadness
- surprise
- trust

A total tally is compiled for each tweet. This is done using the old but perfectly suitable Lingua::EN::Opinion Perl module, created by Gene Boggs, which draws upon the NRC Word-Emotion Association Lexicon. Interestingly, this module seems to have been originally designed to do sentiment analysis on the Book of Revelation.

The results also include totals for the negative emotions [anger, disgust, fear, sadness] and the positive emotions [anticipation, joy, surprise, trust]. If one side dominates, it is saved as the primary em-value, the other as the secondary value. If both are equal, one side is chosen as dominant at random (in which case it doesn't matter anyway).

#Content Analysis

The content analysis module I wrote accomplishes three processes: topic mapping, word processing and evaluation, all using multidimensional arrays of hashes. There are nine primary groupings, A through I, plus a separate Grouping 0. Each grouping covers a general topic and contains a variable number of subtopic objects composed of words and their respective hash values. After matching them against the tweet, the output of all groupings is gathered and then assessed further to determine the relevance and contextual background of the text input.

##Grouping 0

This grouping sits apart from the rest since it does not contain arrays relevant to the recognition of atmospheric dread. Its purpose is to determine the presence of insults, boasts and complaints, as well as matches that may indicate irrelevance. These findings are later used for a more nuanced assessment of the results procured by the primary groupings.

##Grouping A...I

Each grouping contains an array of topics, which in turn contain arrays of words, each associated with a hash of values. So each grouping is, as mentioned before, a multidimensional array of hashes. The hashes themselves are also designed to hold multiple possible values, a feature that is not currently used to a great extent.
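To make this structure a little more tangible, here is a minimal Perl sketch of one such grouping and of matching it against a tweet. The subtopics, words and value keys are hypothetical placeholders, not the actual lexicon or hash layout used by the module:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One grouping: an array of subtopic objects, each holding a topic label
# and an array of words, every word associated with a hash of values.
# All words and values below are hypothetical placeholders.
my @grouping_a = (
    {
        topic => 'nuclear threat',
        words => [
            { word => 'nuclear', values => { weight => 3, type => 'noun' } },
            { word => 'missile', values => { weight => 2, type => 'noun' } },
        ],
    },
    {
        topic => 'military',
        words => [
            { word => 'troops', values => { weight => 1, type => 'noun' } },
        ],
    },
);

# Compare the words of each subtopic object with a tweet and transfer
# the hash values of every match into a findings array.
sub match_grouping {
    my ( $tweet, $grouping ) = @_;
    my @findings;
    for my $subtopic (@$grouping) {
        for my $entry ( @{ $subtopic->{words} } ) {
            push @findings,
                { topic => $subtopic->{topic}, word => $entry->{word}, %{ $entry->{values} } }
                if $tweet =~ /\b\Q$entry->{word}\E\b/i;
        }
    }
    return \@findings;
}
```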
This is a general overview of the topics contained in each grouping:

A: War, the military and the nuclear threat
B: Terrorism, islamophobia and the 'war on terror'
C: Economy, economic warfare, fiscal policy and employment
D: Healthcare, the opiate crisis and the pandemic
E: Racism, immigration, and border control
F: The justice system, institutional violence, crime, and gun control
G: Climate change, environmental issues and natural catastrophes
H: Populists, dictators and atrocities
I: Divisive content, election denial and the insurrection

##Assessment of Results

The words of each subtopic object are compared with the tweet and, if there is a match, their hash values are transferred into the findings array. This findings array, also a multidimensional array of hashes, is then run through a filter that generates a modifier for each grouping, which in turn is used to determine the dominant grouping for each tweet. The filter that applies the modifiers takes into account word type, significance, correlation of objects and groupings, grouping priority, the presence of specified image sources, and the date of creation. Some matches are only relevant at a certain time, indicated by [0] in the analysis display if inactive.

The final modifier, which is applied to the results of the sentiment analysis to generate the octagonal shape, is calculated in the following manner:

( ( ( $prime_em_sett * $active_mod ) + $sec_em_sett ) * $content_modifyer ) / 8.5

$active_mod is the modifier of the dominant grouping. $prime_em_sett and $sec_em_sett are the two totals found during sentiment analysis, and the $content_modifyer value is the final value derived from all grouping modifiers, divided by their respective match and filter incidence (as well as several other factors, such as grouping weighting and possible results from Grouping 0). Finally, the result is divided by 8.5 [(8 primary emotions + 9 topic groupings) / 2]. The result is then applied to the octagonal shape to distort it, using the eight emotion values from the sentiment analysis as a basis. I admit that there are still some balancing issues here, but I like the gradient lattice results generated from highly distorted shapes, which is why I think it's fine like this for now, even if I plan to work on it more in the future.

#Analysis Display

Each tweet's octagonal representation is displayed alongside a detailed documentation of the process that produced it. At the top, the creation date of the tweet is followed by a colored text block containing the tweet itself. The color of both block and shape indicates the finding: relevant matches are red; insult, boast and complaint matches without dread are yellow; other results are either blue (non-assessable) or green (inconsequential). The documentation also contains a list identifying the detected emotions, as well as the results of the content analysis, listing the matched groupings and the results of the filtration process. Finally, a colored summary of the detected findings is displayed. To the right, I have also added a data dump of the findings array.

The analysis display is an SVG file, the XML of which is generated by my own Perl scripts. Text glyphs are converted to paths automatically using Inkscape [this is actually the most unsatisfying part of the rendering process, since Inkscape unnecessarily needs to open its GUI to execute an automated process. I would be incredibly happy to know of a less inefficient way to do this if there is one].
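As an illustration of this last step, here is a minimal Perl sketch that writes a small placeholder SVG by hand and then shells out to Inkscape to convert its text to paths. The file names, colors and dimensions are made up, and the flags shown assume an Inkscape 1.x command line (0.92 releases use -z together with --export-plain-svg=FILE instead); this is a sketch, not the actual rendering script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Write a tiny SVG by hand: a colored block and a text element.
# Colors, sizes and file names are placeholder values.
my $svg = <<'SVG';
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="120">
  <rect x="10" y="10" width="380" height="100" fill="#c03030"/>
  <text x="20" y="60" font-family="sans-serif" font-size="16" fill="#ffffff">example tweet text</text>
</svg>
SVG

open my $fh, '>', 'display_raw.svg' or die "cannot write display_raw.svg: $!";
print {$fh} $svg;
close $fh;

# Convert text glyphs to paths with Inkscape (flag syntax assumes Inkscape 1.x;
# 0.92 would be: inkscape -z display_raw.svg --export-text-to-path --export-plain-svg=display.svg).
system( 'inkscape', 'display_raw.svg',
        '--export-text-to-path',
        '--export-plain-svg',
        '--export-filename=display.svg' ) == 0
    or warn "inkscape conversion failed\n";
```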
#Image Selection

Depending on which topic grouping is dominant, a suitable frame from a movie scene is selected as source material. Each grouping can have an array of attributed scenes, and there are three types of attribution, shown in the analysis display as standard (+), escalated (++), and specified (+++). The latter is only present if a found match is associated with a specified scene. Currently, the utilized scenes are sequences from the following movies:

12 Years a Slave (2013) # version 0.2
A Cure for Wellness (2016)
An Inconvenient Truth (2006)
Black Panther I (2018) # version 0.2
Castle Rock - episode 1 (2018)
Clemency (2019)
Alien Covenant (2017)
Dogman (2012)
Get Out (2017)
Godzilla (1954)
Green Room (2015) # version 0.2
Jarhead (2005)
Lovecraft Country - episodes 1 & 3 (2020)
Margin Call (2011)
Mississippi Burning (1988) # version 0.2
Parasyte (2019)
Punch Drunk Love (2002) # version 0.2
Requiem for a Dream (2000)
Shin Godzilla (2016)
Silent Running (1972)
Terminator II (1984)
The Day after Tomorrow (2004)
The Host (2006) # version 0.2
The Looming Tower - episodes 8 & 9 (2018)
The Stepford Wives (1975)
The Wolverine (2013)
There Will Be Blood (2007) # version 0.2
Too Old to Die Young - episode 1 (2019) # version 0.2
Videodrome (1983) # version 0.2
Watchmen (2009)
World Trade Center (2006) # version 0.2
Under the Skin (2013) # version 0.2
World War Z (2013)

More sources may be added in the future.

#Linear Gradient Lattices

The linear gradient lattices are basically a woven tapestry of linear gradients. These are generated by reading the RGB values from the source image's bitmap (this is done using the GD graphics library). The RGB values are then run through a color recognition algorithm that determines whether a perceptible color change has occurred. If this is the case, the gradient receives a new stop RGB value (a sketch of this step can be found at the end of this section). This process is highly customizable: I can adjust the color sensitivity as well as the distance between the pixels selected for comparison. The Bresenham algorithm is used to determine the subsequent pixels of diagonal gradient lines.

The linear gradient lattices used for this project are composed of two layers. Layer one is the background layer, woven using four standard inclines (horizontal, vertical and the two diagonals). It has a very low resolution and fleshes out the color. Layer two, placed above it, is a far more precise but looser lattice, woven from lines with eight different inclines. The structure of layer two coalesces around and mimics the octagonal shape that was generated by the content analysis process, which is also displayed above layer two, alongside low-opacity remnants of previous shapes that used the same image frame. The latter effect should be quite common in the later parts of 2020.

Due to the inefficiency of SVG files in comparison to bitmap image files, the lattices generated in this manner are about ten times as large as the original source image. This is the case despite considerable efforts to minimize file sizes by increasing the granularity, as well as the distance between the gradients of the second lattice. Luckily, this is not so relevant for my storage and website efficiency, since the SVG files can be compressed quite wonderfully if GZIP is enabled.
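Here is that sketch: a minimal Perl example using the GD library that walks a single horizontal pixel row and records a new gradient stop whenever the color changes by more than a threshold. The file name, step size, threshold and the simple distance metric are assumptions for illustration, not the actual color recognition algorithm used here:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use GD;

# Read the source bitmap (placeholder file name).
my $img = GD::Image->newFromPng('frame.png')
    or die "cannot read frame.png\n";
my ( $width, $height ) = $img->getBounds;

my $y         = int( $height / 2 );  # sample one horizontal row
my $step      = 4;                   # distance between compared pixels (assumed)
my $threshold = 30;                  # "perceptible change" cutoff (assumed metric)

my @stops;
my @prev;
for ( my $x = 0; $x < $width; $x += $step ) {
    my @rgb = $img->rgb( $img->getPixel( $x, $y ) );
    if ( !@prev
        or abs( $rgb[0] - $prev[0] )
         + abs( $rgb[1] - $prev[1] )
         + abs( $rgb[2] - $prev[2] ) > $threshold )
    {
        # A perceptible change: add a new gradient stop at this offset.
        push @stops, { offset => $x / $width, rgb => [@rgb] };
        @prev = @rgb;
    }
}

# Emit the stops of one SVG linearGradient element.
printf qq{<stop offset="%.3f" stop-color="rgb(%d,%d,%d)"/>\n},
    $_->{offset}, @{ $_->{rgb} } for @stops;
```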
#Archive Structure

The archives of versions 0.1 and 0.2 are each segmented into monthly sub-archives so that the viewing experience is not too laggy. I used JavaScript, jQuery and jQuery UI to build the archive, as well as a lazy-loading JavaScript library for the thumbnails, which are also linear gradient lattices and thus a bit too large to be loaded all at once.

#Accumulation Diagrams

Each month has its own set of accumulation diagrams that show all shapes generated for each respective grouping. A set of these diagrams for each year, as well as for the entire term, is also planned but yet to be implemented.
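To illustrate what such an accumulation diagram amounts to technically, here is a minimal Perl sketch that overlays several octagon outlines of one grouping at low opacity in a single SVG. The shape data, color and file name are hypothetical placeholders, not the actual diagram generator:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical input: the octagon outlines generated for one grouping in one
# month, stored as SVG polygon point strings (placeholder values).
my @shapes = (
    '200,40 320,90 360,200 320,310 200,360 80,310 40,200 80,90',
    '200,60 300,100 340,200 300,300 200,340 100,300 60,200 100,100',
);

# Overlay all shapes at low opacity in a single accumulation SVG.
open my $fh, '>', 'accumulation_grouping_A.svg' or die "cannot write: $!";
print {$fh} qq{<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">\n};
print {$fh} qq{  <polygon points="$_" fill="none" stroke="#c03030" stroke-opacity="0.25"/>\n}
    for @shapes;
print {$fh} "</svg>\n";
close $fh;
```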