Saturday, January 25, 2020

Compressive Sensing: A Performance Comparison of Measurement Matrices

Compressive Sensing: A Performance Comparison of Measurement Matrices
Y. Arjoune, N. Kaabouch, H. El Ghazi, and A. Tamtaoui

Abstract: The compressive sensing paradigm involves three main processes: sparse representation, measurement, and sparse recovery. This theory deals with sparse signals, using the fact that most real-world signals are sparse, and it therefore uses a measurement matrix to sample only the components that best represent the sparse signal. The choice of the measurement matrix affects the success of the sparse recovery process, so the design of an accurate measurement matrix is an important task in compressive sensing. Over the last decades several measurement matrices have been proposed, and a detailed review of these matrices and a comparison of their performance is needed. This paper gives an overview of compressive sensing and highlights the measurement process. It then proposes a three-level classification of measurement matrices and compares the performance of eight measurement matrices after presenting the mathematical model of each. Several experiments are performed to compare these matrices using four evaluation metrics: sparse recovery error, processing time, covariance, and phase transition diagram. Results show that the Circulant, Toeplitz, and Partial Hadamard measurement matrices allow fast reconstruction of sparse signals with small recovery errors.

Index Terms: Compressive sensing, sparse representation, measurement matrix, random matrix, deterministic matrix, sparse recovery.

Traditional data acquisition techniques acquire N samples of a signal sampled at or above the Nyquist rate (twice the highest frequency in the signal) in order to guarantee perfect reconstruction. After acquisition, data compression is needed to reduce the large number of samples, because most signals are sparse and need only a few samples to be represented. This process is time consuming because of the large number of samples acquired, and devices are often unable to store the amount of data generated. Compressive sensing addresses both problems by combining data acquisition and data compression in a single process: it exploits the sparsity of the signal to recover the original sparse signal from a small set of measurements [1]. A signal is sparse if only a few of its components are nonzero. Compressive sensing has proven itself a promising solution for high-density signals and has major applications ranging from image processing [2] to wireless sensor networks [3-4], spectrum sensing in cognitive radio [5-8], and channel estimation [9-10].

As shown in Fig. 1, compressive sensing involves three main processes: sparse representation, measurement, and sparse recovery. If a signal is not sparse, sparse representation projects it onto a suitable basis in which it becomes sparse. Examples of sparse representation techniques are the Fast Fourier Transform (FFT), the Discrete Wavelet Transform (DWT), and the Discrete Cosine Transform (DCT) [11]. The measurement process consists of selecting a small number of measurements M that best represent the sparse signal of length N, where M << N. Mathematically, this amounts to multiplying the sparse signal by a measurement matrix, which has to have a small mutual coherence or satisfy the Restricted Isometry Property.
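To make the measurement step concrete, here is a minimal Python sketch (not from the paper; the sizes N, M, and k and the Gaussian choice of the matrix are illustrative assumptions) showing how a k-sparse signal is compressed into M << N measurements:

import numpy as np

# Minimal sketch of the measurement step y = Phi @ x (illustrative sizes, not from the paper).
rng = np.random.default_rng(0)

N, M, k = 256, 64, 8          # signal length, number of measurements, sparsity (example values)

# Build a k-sparse signal: k nonzero entries at random positions.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix with variance 1/M (a common normalization).
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Compressive measurements: M numbers instead of N samples.
y = Phi @ x
print(y.shape)   # (64,)

The vector y has only M entries, yet, as discussed next, the sparse recovery process can reconstruct all N entries of x from it.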
The sparse recovery process aims at recovering the sparse signal from the few measurements selected in the measurement process, given the measurement matrix Φ. The sparse recovery problem is thus an underdetermined system of linear equations, which has an infinite number of solutions. However, the sparsity of the signal and the small mutual coherence of the measurement matrix ensure a unique solution, and the problem can be formulated as a linear optimization problem. Several algorithms have been proposed to solve this sparse recovery problem. They can be classified into three main categories: Convex and Relaxation [12-14], Greedy [15-20], and Bayesian [21-23]. Techniques in the Convex and Relaxation category solve the sparse recovery problem through optimization algorithms such as Gradient Descent and Basis Pursuit; these techniques are complex and have a high recovery time. As an alternative that reduces processing time and speeds up recovery, Greedy techniques build the solution iteratively. Examples include Orthogonal Matching Pursuit (OMP) and its derivatives; these Greedy techniques are faster but sometimes inefficient. Bayesian-based techniques, which use prior knowledge of the sparse signal to recover the original signal, can also be a good approach; examples include Bayesian via Laplace Prior (BSC-LP), Bayesian via Relevance Vector Machine (BSC-RVM), and Bayesian via Belief Propagation (BSC-BP).
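As an illustration of the Greedy category mentioned above, the sketch below implements a textbook Orthogonal Matching Pursuit loop; it is a minimal, generic version for illustration, not the specific implementation evaluated in the paper:

import numpy as np

def omp(Phi, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick k columns of Phi."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

Applied to the Phi, y, and k of the previous sketch, omp(Phi, y, k) returns an estimate whose support matches that of x whenever the columns of Phi are sufficiently incoherent.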
In general, the existence and uniqueness of the solution are guaranteed as soon as the measurement matrix used to sample the sparse signal satisfies certain criteria. The two well-known criteria are the Mutual Incoherence Property (MIP) and the Restricted Isometry Property (RIP) [24]. Therefore, the design of measurement matrices is an important process in compressive sensing. It involves two fundamental steps: 1) selection of a measurement matrix and 2) determination of the number of measurements necessary to sample the sparse signal without losing the information stored in it. A number of measurement matrices have been proposed. These matrices can be classified into two main categories: random and deterministic. Random matrices are generated at random, for example with independent and identically distributed Gaussian or Bernoulli entries or from random Fourier ensembles, and they are of two types: unstructured and structured. Unstructured matrices are generated randomly following a given distribution; examples include the Gaussian, Bernoulli, and Uniform matrices. They are easy to construct and satisfy the RIP with high probability [26]; however, because of the randomness, they present drawbacks such as high computational cost and costly hardware implementation [27]. Structured matrices are generated following a given structure; examples include the Random Partial Fourier and the Random Partial Hadamard matrices. Deterministic matrices, on the other hand, are constructed deterministically to have a small mutual coherence or to satisfy the RIP. Matrices of this category are of two types: semi-deterministic and full-deterministic. Semi-deterministic matrices have a deterministic construction that involves randomness in the construction process; examples are the Toeplitz and Circulant matrices [31]. Full-deterministic matrices have a purely deterministic construction. Examples include second-order Reed-Muller codes [28], Chirp sensing matrices [29], binary Bose-Chaudhuri-Hocquenghem (BCH) codes [30], and quasi-cyclic low-density parity-check (QC-LDPC) matrices [32].

Several papers comparing the performance of deterministic and random matrices have been published. For instance, Monajemi et al. [43] describe some semi-deterministic matrices such as Toeplitz and Circulant and show that their phase transition diagrams are similar to those of random Gaussian matrices. In [11], the authors provide a survey of the applications of compressive sensing, highlight the drawbacks of unstructured random measurement matrices, and present the advantages of some full-deterministic measurement matrices. In [27], the authors survey full-deterministic matrices (Chirp, second-order Reed-Muller, and binary BCH matrices) and compare them with unstructured random matrices (Gaussian, Bernoulli, and Uniform). All these papers compare two types of matrices of the same category or from two different categories. However, to the best of our knowledge, no previous work has compared the performance of measurement matrices across both categories and all four types: random unstructured, random structured, semi-deterministic, and full-deterministic. This paper addresses this gap by providing an in-depth overview of the measurement process and comparing the performance of eight measurement matrices, two from each type.

The rest of this paper is organized as follows. Section 2 gives the mathematical model behind compressive sensing. Section 3 provides a three-level classification of measurement matrices. Section 4 gives the mathematical model of each of the eight measurement matrices. Section 5 describes the experimental setup, defines the evaluation metrics used for the performance comparison, and discusses the experimental results. Section 6 gives conclusions and perspectives.

Compressive sensing exploits sparsity and compresses a k-sparse signal x of length N by multiplying it by a measurement matrix Φ of size M × N, where M << N. The resulting vector y = Φx is called the measurement vector. If the signal is not sparse, a simple projection onto a suitable basis Ψ can make it sparse, i.e., x = Ψs with s sparse. The sparse recovery process aims at recovering the sparse signal given the measurement matrix and the vector of measurements. The sparse recovery problem, which is an underdetermined system of linear equations, can thus be stated as:

min_s ‖s‖₀   subject to   y = ΦΨs      (1)

where ‖·‖₀ is the ℓ₀-norm (the number of nonzero entries), s is the sparse representation of the signal in the basis Ψ, Φ is the measurement matrix, and y is the set of measurements. For the rest of this paper, we consider the signals to be sparse, i.e., Ψ = I and x = s. Problem (1) can then be written as:

min_x ‖x‖₀   subject to   y = Φx      (2)

This problem is NP-hard and cannot be solved in practice. Instead, its convex relaxation is considered by replacing the ℓ₀-norm with the ℓ₁-norm. The sparse recovery problem can then be stated as:

min_x ‖x‖₁   subject to   y = Φx      (3)

where ‖·‖₁ is the ℓ₁-norm, x is the k-sparse signal, Φ is the measurement matrix, and y is the set of measurements. The solution of problem (3) is guaranteed as soon as the measurement matrix has a small mutual coherence or satisfies the RIP of order 2k.
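Problem (3) can be handed to any linear-programming solver by splitting x into nonnegative parts. The sketch below is one such minimal formulation; it assumes SciPy is available and is only an illustration of the convex-relaxation route, not the solver used in the paper:

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 subject to Phi @ x = y via linear programming.

    Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v).
    """
    M, N = Phi.shape
    c = np.ones(2 * N)                      # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])           # equality constraint: Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

With x = u − v and u, v ≥ 0, the objective sum(u) + sum(v) equals ‖x‖₁ at the optimum, so the linear program is equivalent to (3).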
Definition 1: The coherence measures the maximum correlation between any two columns of the measurement matrix Φ. If Φ has normalized columns φ₁, ..., φ_N, each of unit length, then the Mutual Coherence Constant (MIC) is defined as:

μ(Φ) = max_{1 ≤ i < j ≤ N} |⟨φ_i, φ_j⟩|      (4)

Compressive sensing is concerned with matrices that have low coherence, which means that few samples are required for a perfect recovery of the sparse signal.

Definition 2: A measurement matrix Φ satisfies the Restricted Isometry Property of order k if there exists a constant δ_k ∈ (0, 1) such that, for every k-sparse signal x,

(1 − δ_k) ‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_k) ‖x‖₂²      (5)

where ‖·‖₂ is the ℓ₂-norm and δ_k is called the Restricted Isometry Constant (RIC) of Φ, which should be much smaller than 1.

As shown in Fig. 2, measurement matrices can be classified into two main categories: random and deterministic. Matrices of the first category are generated at random, are easy to construct, and satisfy the RIP with high probability. Random matrices are of two types: unstructured and structured. Matrices of the unstructured random type are generated at random following a given distribution; for example, the Gaussian, Bernoulli, and Uniform matrices are generated following the Gaussian, Bernoulli, and Uniform distributions, respectively. For matrices of the second type, structured random, the entries are generated following a given function or specific structure, and randomness then comes into play by selecting random rows from the generated matrix. Examples of structured random matrices are the Random Partial Fourier and the Random Partial Hadamard matrices. Matrices of the second category, deterministic, are highly desirable because they are constructed deterministically to satisfy the RIP or to have a small mutual coherence. Deterministic matrices are also of two types: semi-deterministic and full-deterministic. Semi-deterministic matrices are generated in two steps: the first step generates the entries of the first column randomly, and the second step generates the remaining columns from the first by applying a simple transformation, such as shifting the elements of the first column. Examples include the Circulant and Toeplitz matrices [24]. Full-deterministic matrices have a purely deterministic construction; binary BCH, second-order Reed-Muller, Chirp sensing, and quasi-cyclic low-density parity-check (QC-LDPC) matrices are examples of this type.

Based on the classification provided in the previous section, eight measurement matrices were implemented, two from each type: Gaussian and Bernoulli measurement matrices from the unstructured random type, Random Partial Fourier and Random Partial Hadamard measurement matrices from the structured random type, Toeplitz and Circulant measurement matrices from the semi-deterministic type, and finally Chirp and binary BCH measurement matrices from the full-deterministic type. In the following, the mathematical model of each of these eight measurement matrices is described.

A. Random Measurement Matrices

Random matrices are generated at random, for example with independent and identically distributed Gaussian or Bernoulli entries or from random Fourier ensembles. These random matrices are of two types: unstructured and structured.

1) Unstructured random type matrices

Unstructured random measurement matrices are generated randomly following a given distribution. A matrix with N columns is generated, and M of its rows are then selected at random. Examples of this type include the Gaussian, Bernoulli, and Uniform matrices. In this work, the Random Gaussian and Random Bernoulli matrices were selected for implementation.
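Before detailing their models, here is a minimal sketch that draws small Gaussian and Bernoulli matrices and evaluates the mutual coherence of equation (4); the sizes and the 1/√M normalization are illustrative assumptions, not values from the paper:

import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 256                                   # example sizes

def coherence(Phi):
    """Mutual coherence of equation (4): max |<phi_i, phi_j>| over normalized columns."""
    A = Phi / np.linalg.norm(Phi, axis=0)        # normalize columns to unit length
    G = np.abs(A.T @ A)                          # absolute inner products between columns
    np.fill_diagonal(G, 0.0)                     # ignore the diagonal (i = j)
    return G.max()

Phi_gauss = rng.standard_normal((M, N)) / np.sqrt(M)            # Gaussian entries, variance 1/M
Phi_bern = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)    # Bernoulli (+-1) entries

print(coherence(Phi_gauss), coherence(Phi_bern))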
The mathematical model of each of these two measurement matrices is given below.

a) Random Gaussian matrix

The entries of a Gaussian matrix are independent and follow a normal distribution with expectation 0 and variance 1/M. The probability density function of a normal distribution is:

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))      (6)

where μ is the mean (expectation) of the distribution, σ is the standard deviation, and σ² is the variance. The random Gaussian matrix satisfies the RIP with high probability provided the sparsity satisfies:

K ≤ C · M / log(N/K)      (7)

where K is the sparsity of the signal, M is the number of measurements, N is the length of the sparse signal, and C is a positive constant [36].

b) Random Bernoulli matrix

A random Bernoulli matrix is a matrix whose entries take the values +1/√M and −1/√M with equal probability. It therefore follows a Bernoulli distribution, which has two possible outcomes labeled n = 0 and n = 1; the outcome n = 1 occurs with probability p = 1/2 and n = 0 with probability q = 1 − p = 1/2. The probability mass function is thus:

f(n) = pⁿ (1 − p)^(1−n) = 1/2,  n ∈ {0, 1}      (8)

The Random Bernoulli matrix satisfies the RIP with the same probability as the Random Gaussian matrix [36].

2) Structured random type matrices

Gaussian and other unstructured matrices have the disadvantage of being slow, so large-scale problems are not practicable with Gaussian or Bernoulli matrices. The hardware implementation of an unstructured matrix is also more difficult and requires significant memory. Structured random matrices, on the other hand, are generated following a given structure, which reduces the randomness, the memory storage, and the processing time. Two structured matrices were selected for implementation in this work: the Random Partial Fourier and the Random Partial Hadamard matrix. The mathematical model of each is described below.

a) Random Partial Fourier matrix

The Discrete Fourier matrix is an N × N matrix whose (j, k) entry is given by:

F_{j,k} = (1/√N) · e^(−2πi·jk/N),  j, k = 0, 1, ..., N − 1      (9)

The Random Partial Fourier matrix, which consists of M randomly chosen rows of the Discrete Fourier matrix, satisfies the RIP with high probability if:

M ≥ C · K · log⁴(N)      (10)

where M is the number of measurements, K is the sparsity, N is the length of the sparse signal, and C is a positive constant [36].

b) Random Partial Hadamard matrix

The Hadamard matrix is a matrix whose entries are +1 and −1 and whose columns are mutually orthogonal. A matrix H of order n is a Hadamard matrix if its transpose is closely related to its inverse, which can be expressed as:

H Hᵀ = n Iₙ      (11)

where Iₙ is the n × n identity matrix and Hᵀ is the transpose of H. The Random Partial Hadamard matrix consists of taking M random rows of the Hadamard matrix. This measurement matrix satisfies the RIP with high probability provided the number of measurements M scales like K · log⁴(N) up to positive constants, where K is the sparsity of the signal and N its length [35].
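A minimal sketch of the two structured random constructions just described; it assumes SciPy for the Hadamard matrix, N a power of two (so that the Sylvester-type Hadamard matrix exists), and illustrative sizes and normalizations:

import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(2)
N, M = 256, 64                                # N must be a power of two for scipy's Hadamard construction

rows = rng.choice(N, size=M, replace=False)   # random subset of M rows

# Random Partial Fourier: M random rows of the N x N DFT matrix, scaled by 1/sqrt(N).
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
Phi_fourier = F[rows, :]

# Random Partial Hadamard: M random rows of the N x N Hadamard matrix (entries +-1).
H = hadamard(N)
Phi_hadamard = H[rows, :] / np.sqrt(M)        # a common normalization choice

print(Phi_fourier.shape, Phi_hadamard.shape)  # (64, 256) (64, 256)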
B. Deterministic measurement matrices

Deterministic measurement matrices are designed following a deterministic construction to satisfy the RIP or to have a low mutual coherence. Several deterministic measurement matrices have been proposed to address the problems of random matrices. As mentioned in the previous section, these matrices are of two types: semi-deterministic and full-deterministic. In the following, matrices from both types are presented in terms of coherence and RIP.

1) Semi-deterministic type matrices

Generating a semi-deterministic measurement matrix takes two steps: the first step randomly generates the first column, and the second step builds the full matrix by applying a simple transformation to that column, such as a cyclic shift, to generate each subsequent row. Examples of matrices of this type are the Circulant and Toeplitz matrices. Their mathematical models are given below.

a) Circulant matrix

For a given vector c = (c₁, c₂, ..., c_N), the associated Circulant matrix C has entries

C_{j,k} = c_{(j−k) mod N}

so that each row is a cyclic right-shift of the row above it. If we choose a random subset S ⊂ {1, ..., N} of cardinality M, the partial Circulant matrix consisting of the rows indexed by S achieves the RIP with high probability provided

M ≥ C · (K · log N)^(3/2)      (12)

for a positive constant C, where N is the length of the sparse signal and K its sparsity [34].

b) Toeplitz matrix

The Toeplitz matrix T associated with a vector t has entries

T_{j,k} = t_{j−k}      (13)

i.e., it is constant along each diagonal (T_{j,k} = T_{j+1,k+1}); the Circulant matrix is a special case of the Toeplitz matrix. If we randomly select a subset S of cardinality M, the Restricted Isometry Constant of the Toeplitz matrix restricted to the rows indexed by S is small with high probability provided

M ≥ C · (K · log N)^(3/2)      (14)

for a positive constant C, where K is the sparsity of the signal and N its length [34].

2) Full-deterministic type matrices

Full-deterministic matrices have purely deterministic constructions based on the mutual coherence or on the RIP. Two examples of such constructions are given below: the Chirp sensing matrices and the binary Bose-Chaudhuri-Hocquenghem (BCH) code matrices.

a) Chirp sensing matrices

The columns of a Chirp sensing matrix are given by chirp signals. A discrete chirp signal of length m with chirp rate r and base frequency b has the form

v_{r,b}(ℓ) = (1/√m) · e^((2πi/m)(bℓ + rℓ²)),  ℓ = 0, 1, ..., m − 1      (15)

The full chirp measurement matrix can be written as

Φ = [ U_{r₁}  U_{r₂}  ...  U_{r_m} ]      (16)

where U_r is an m × m matrix whose columns are the chirp signals with a fixed chirp rate r and base frequencies b varying from 0 to m − 1; the full matrix Φ is therefore of size m × m². Given a K-sparse signal x with chirp code measurements y, where m is the length of the chirp code, if the sparsity K satisfies condition (17), then x is the unique solution recovered by the sparse recovery algorithms. The main limitation of this matrix is that the number of measurements is restricted to M = √N, since the full chirp matrix has m rows and N = m² columns [29].

b) Binary BCH matrices

Let denote as a divisor of for some integer an
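Closing the loop on the semi-deterministic constructions above, the sketch below builds partial Circulant and Toeplitz matrices from random seed vectors using the two-step recipe (random first column, then a structured completion); the sizes, the ±1 seed, and the 1/√M normalization are illustrative assumptions:

import numpy as np
from scipy.linalg import circulant, toeplitz

rng = np.random.default_rng(3)
N, M = 256, 64

# Step 1: random seed vector (the "first column" of the semi-deterministic construction).
c = rng.choice([-1.0, 1.0], size=N)

# Step 2: deterministic structure built from the seed.
C = circulant(c)                       # each row is a cyclic shift of the row above it
r = rng.choice([-1.0, 1.0], size=N)
r[0] = c[0]                            # Toeplitz convention: first row and first column share the corner entry
T = toeplitz(c, r)                     # constant along each diagonal

# Partial matrices: keep M randomly chosen rows, as in the RIP statements above.
rows = rng.choice(N, size=M, replace=False)
Phi_circ = C[rows, :] / np.sqrt(M)
Phi_toep = T[rows, :] / np.sqrt(M)

print(Phi_circ.shape, Phi_toep.shape)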

Friday, January 17, 2020

The Great Power of Hellsberry!

As the chilled whisper of wind hit the side of my face, I realised how dazzling and inspiring this village really was. The moon shone bright and luxurious in the sky, like a slice of cheese sitting there, ready to be taken and eaten up whole. The blues of the midnight atmosphere made it look like the deep ocean. Gentle and tranquil, it stood there for people's eyes to gaze up at, with the brightest of tones revealed to the world. The stars stood out, with their glittering and scintillating sparkle of light, like a Christmas tree ready for the blissful times to come. The folks looked up and gasped at how astonishing the heavens looked on this crisp and cold night. The gigantic mountains were covered with white gleaming snow. It lay there, waiting for the hands of children to pick it up and throw it at one another. The snow, as I picked it up, rubbed against the smoothness of my hands, making them feel bitterly cold and sending shivers and tingles right through the tips of my fingers. The black night sky looked very gloomy as the mist disguised the peaks of the mountains. The dull mist made the mountains look icy and dangerous for anyone attempting to enter this treacherous unknown land. It appeared very 'alien', as you did not know where the sky ended and where the darkness of the land started. The village folk were kept warm as they had their fires alight. The glow from the fires shone through the windows of every house, which reflected a stream of ripples on the surface of the snow, making tiny little crystals glisten and sparkle just like diamonds. Lanterns glowed by the sides of beds, shining through the rooms, making the village beam with warmth against the background of the dreary hours of darkness. The eye of the beholder could see how magnificent the snow really was. There was a sign of some kind that was partly rotted and had been blown down by fierce winds. It was covered in snow, and icicles had formed on its edges. I wondered to myself what this mysterious piece of wood was. I scraped the snow and ice off with my bare hands, and it said 'Welcome to Hellsberry!' The name of this place sounded very familiar; I knew I had heard it before, but where? Then it came into my mind. An old and wise woman once told me about a cave that was near the heart of Hellsberry. A small and mystical cave that lay hidden away in the mountains, near a lake that was frozen all through the year, even in summer. Dead, rotting trees lay helpless around the lake, their brown crinkled leaves blown away by the cold and ferocious winds. No animals strolled through this desolate place any more. It used to be a beautiful and tranquil place. The valley was green and lush, with brightly coloured, sweet-smelling flowers that danced happily in the gentle breeze. Here, all types of animals would come and graze on the long green grass and laze around under the warmth of the sun shining above. The deep blue river was plentiful with fish of all different sizes, its fresh clear water splashing against the sides of rocks, making ripples as it gently meandered downhill. The waterfall cascaded over rocks, beside which tropical flowers grew. Deer would stride up to the lake to drink from it, while birds flew around, chirping and singing to each other. I wanted to find out if this story she told me was really true or not.
I arrived at the foot of the mountains, which had become a dangerous and risky place for any of mankind to face after the creature came and made its home there. Even if mankind were the strongest and bravest on earth, nothing could outwit and defeat this creature of wisdom, fire and great power. It was a creature that many of the villagers were afraid of. After it came, the place of so much beauty turned into something dark and dismal. Leaves dropped off rotted trees, animals ran as if their enemy was chasing after them, the waterfall turned brown and died, flowers were crushed and damaged, and the river froze over as the chilled air passed through it. The creature did this, but for how long would it carry on? Every full moon for the last 300 years, as the night grew dark and the bright elegant stars appeared in the midnight sky, a ball of raging fire would move about against the black night background. A glow of bright and angry colours lit up the village as it reflected down on this innocent and helpless place. It grew bigger and brighter every time it appeared in the eyes of the people, and the people grew more scared every time it was upon them. They had fought bravely against this creature, but failed to keep it away or even destroy it. I was going to change this. The creature lived in the caves upon the white glossy snow of the mountains. The caves ran for miles along the ridge of the mountains, with numerous turns and winding passageways. Fire lived in the centre of this mountain, the biggest and tallest of all. Hell was placed here for no one to seek and find. The cave was cut deep down amongst snow and ice. Around this large mountain were frequent snow blizzards. This gave the creature vital protection, as no man could reach it. I advanced towards this unknown mountain. There was a collage of deep, dark, grey clouds that gradually got bigger and more furious by the minute, their forceful power waiting to grasp at anybody entering this place of immorality. As I advanced further up the mountain I could feel the ice-cold snow against the side of my face. I forced myself through the extreme conditions of the blizzard. The snow was dragging me to the centre of hell. The pain of the cold grew within me. The noise of the screeching wind passed through the inside of my red frozen ears, like a thousand animals running from what they are most afraid of in their lives. I ran towards the jagged-shaped mouth of the cave, with my soul being pulled back behind me to the centre of the storm. The snow got heavier and heavier as I got closer to the cave. There I saw a glimmer of light, making the cave light up through the white background. I got closer to the middle of hell, where this powerful and unpredictable creature awaited. Its lifetime was about to end. The cave smelled of dead human remains that had been there for centuries; blood was splattered against the sides of the rough-toothed rocks. Bones lay in the corner, one on top of another. I felt more fearful as time went on, and as I discovered more about the secrets behind the cave. There lay a fire in the middle of the walls. I started to feel vibration under my feet. Something big was coming towards me; I did not know what to expect. It appeared from behind me, a tall dark mysterious figure. It moved closer and closer. I moved back, tripping over a blood-stained rock. I banged my head. I was on the floor. My head lay there. I couldn't move. The creature moved even closer.
Blood poured down the side of my head. I felt faint. A black figure was in front of me. My eyes closed. I thought to myself: what was it?

Thursday, January 9, 2020

Can Documentary Films Really Create Change?

After seeing a gripping documentary film, it's not uncommon to feel motivated to take action. But does social change actually occur as a result of a documentary? According to sociologists, documentary films may indeed play a key role in raising awareness of social issues and increasing political mobilization.

Key Takeaways: Documentaries and Social Change
A team of sociologists sought to investigate whether documentary films can be linked to political and social change. Researchers found that Gasland, an anti-fracking documentary, was linked to increases in discussion about fracking. Gasland was also linked to anti-fracking political mobilizations.

Gasland and the Anti-Fracking Movement
For a long time, many have assumed that documentary films about issues that affect society are able to motivate people to create change, but this was just an assumption, as there was no hard evidence to show such a connection. However, a 2015 sociology paper tested this theory with empirical research and found that documentary films can in fact motivate conversation around issues, promote political action, and spark social change. A team of researchers, led by Dr. Ion Bogdan Vasi of the University of Iowa, focused on the case of the 2010 film Gasland, about the negative impacts of drilling for natural gas by hydraulic fracturing (fracking), and its potential connection to the anti-fracking movement in the U.S. For their study, published in American Sociological Review, the researchers looked for behaviors consistent with an anti-fracking mindset around the time the film was first released (June 2010) and when it was nominated for an Academy Award (February 2011). They found that web searches for Gasland and social media chatter related to both fracking and the film spiked around those times. Speaking about the study results, Vasi said, "In June 2010, the number of searches for Gasland was four times higher than the number of searches for fracking, indicating that the documentary created significant interest in the topic among the general public."

Can Documentaries Help Shape the Conversation?
The researchers found that attention to fracking on Twitter increased over time and received large bumps (6 and 9 percent, respectively) with the film's release and its award nomination. They also saw a similar increase in mass media attention to the issue, and by studying newspaper articles, found that the majority of news coverage of fracking also mentioned the film in June 2010 and January 2011.

Documentaries and Political Action
The researchers found a clear connection between screenings of Gasland and anti-fracking actions like protests, demonstrations, and civil disobedience in communities where screenings took place. These anti-fracking actions, what sociologists call mobilizations, helped fuel policy changes related to fracking in the Marcellus Shale (a region that spans Pennsylvania, Ohio, New York, and West Virginia).

Implications for Social Movements
Ultimately, the study shows that a documentary film associated with a social movement, or perhaps another kind of cultural product like art or music, can have real effects at both national and local levels. In this particular case, the researchers found that the film Gasland had the effect of changing how the conversation around fracking was framed, from one that suggested that the practice is safe to one that focused on the risks associated with it.
This is an important finding because it suggests that documentary films (and maybe cultural products generally) can serve as important tools for social and political change. That could have a real impact on the willingness of investors and foundations that award grants to support documentary filmmakers. This knowledge about documentary films, and the possibility of increased support for them, could lead to a rise in their production, prominence, and circulation. It's possible that this could also have an impact on funding for investigative journalism, a practice that has largely fallen away as re-reporting and entertainment-focused news have skyrocketed over the last couple of decades. In the written report about the study, the researchers concluded by encouraging others to study the connections between documentary films and social movements. They suggest that there may be important lessons to be learned for filmmakers and activists alike by understanding why some films fail to catalyze social action while others succeed.

References
Diedrich, Sara. "The Power of Film." University of Iowa: Department of Sociology and Criminology, 2 Sep. 2015. https://clas.uiowa.edu/sociology/newsletter/power-film
Vasi, Ion Bogdan, et al. "'No Fracking Way!' Documentary Film, Discursive Opportunity, and Local Opposition Against Hydraulic Fracturing in the United States, 2010 to 2013." American Sociological Review, vol. 80, no. 5, 2015, pp. 934-959. https://doi.org/10.1177/0003122415598534