Mailalytics looks at subscriber engagement at three levels: per member, message frequency, and conversation (thread) length. These metrics provide a lot of useful insight. The number of people who only send new messages but never reply shows how many folks are sharing but not really discussing. The ratio of original (non-reply) messages to replies tells us how much actual conversation is happening on the list (as opposed to “check this out” or “here’s a job posting”, etc). The total number of messages in a thread gives a rough estimate of the quality of the conversation (generally, the better the convo, the more replies it draws). And comparing the number of people who send only one message against those who send five or more shows how many people are regularly active within the community versus those who may listen frequently but only participate on occasion.
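As a rough sketch of how these metrics can be computed, here's a minimal Python version. The message schema (a sender, a message ID, and an optional in-reply-to reference) and the function names are assumptions for illustration; Mailalytics' actual input format and API may differ.

```python
from collections import Counter

def engagement_stats(messages):
    """Compute the engagement metrics described above from a list of
    (sender, message_id, in_reply_to) tuples, where in_reply_to is None
    for an original (non-reply) message. Schema is an assumption."""
    per_sender = Counter(sender for sender, _, _ in messages)
    replies = [m for m in messages if m[2] is not None]
    originals = [m for m in messages if m[2] is None]

    # Members who start threads but never reply to anyone
    repliers = {m[0] for m in replies}
    posters_only = {m[0] for m in originals} - repliers

    # Thread length: walk each message up to its root
    parent = {mid: ref for _, mid, ref in messages}
    def root(mid):
        while parent.get(mid) is not None:
            mid = parent[mid]
        return mid
    thread_sizes = Counter(root(mid) for _, mid, _ in messages)

    return {
        "posters_never_reply": len(posters_only),
        "originals_vs_replies": (len(originals), len(replies)),
        "avg_thread_length": sum(thread_sizes.values()) / len(thread_sizes),
        "one_time_senders": sum(1 for c in per_sender.values() if c == 1),
        "active_senders": sum(1 for c in per_sender.values() if c >= 5),
    }
```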
Check out this Case Study: Philly Startup Leaders Engagement Dashboard, based on Mailalytics results.
Of these three metrics, Cody was responsible for the member-level and message-frequency statistics. He was also responsible for DRYing the code and creating the basic library API and command-line scripts. (Cody loves data-driven business and enjoyed applying it to PSL itself.)
Posted in .
By codyaray
– April 6, 2011
The Agent Technology Center at Czech Technical University (CTU) developed a highly-regarded intelligent agent framework and simulator, AGLOBE. However, the communication model AGLOBE used in simulation was boolean; that is, it assumed either a perfect connection or no connection. While this is better than treating the network environment as a perfectly reliable black box, it is still a poor approximation for many dynamic network environments and could potentially lead to major failures when testing distributed computing or distributed AI algorithms in the field. While working at CTU in Prague, I developed a shared library intended to provide new and existing agent simulators with facilities for approximating wireless multi-hop communications on mobile ad-hoc networks. Specifically, WAMAS provided a set of communication models and a domain-specific language for manipulating and hooking itself into existing agent simulators. WAMAS is built in the spirit of (and with some code from) the MATES application-layer simulator (developed by a colleague), approximating the lower-level networking processes to provide agent simulators with a better model of agent communications on constrained mobile ad-hoc networks.
WAMAS was designed to model wireless networks; the primary wireless properties considered were transmit power decay, finite bandwidth and throughput, and network latency. WAMAS is based upon four core models: improved versions of the link connectivity and data transport models found in MATES, as well as new models for media access control and ad-hoc routing. Each model can be thought of as conglomerating and approximating an associated “block” of the OSI model.
An exact connectivity model accounts for link quality degradation due to transmit power decay. Sharing the finite bandwidth among simultaneously transmitted packets can be considered a very rudimentary approximation of frequency-division multiple access bandwidth allocation. Dijkstra's algorithm was used as a zeroth-order approximation of multi-hop ad-hoc routing in the network. A data transport model defined the amount of time (in simulator iterations) required for an entity to be sent over a specific link, usually a function of the link quality and the size of the entity.
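To make the routing and transport models concrete, here's a minimal Python sketch of the general idea: Dijkstra's algorithm over a link-quality graph, plus a transport-time function of link quality and entity size. The specific formulas (inverse-square decay, inverse-quality edge costs) are illustrative assumptions, not WAMAS's actual models.

```python
import heapq
import math

def link_quality(dist, max_range=100.0):
    """Link quality under transmit power decay: a simple inverse-square
    falloff, zero beyond maximum range (illustrative, not WAMAS's model)."""
    if dist >= max_range:
        return 0.0
    return 1.0 / (1.0 + dist ** 2)

def shortest_route(links, src, dst):
    """Dijkstra over the connectivity graph; edge cost is the inverse of
    link quality, so high-quality links are preferred."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, q in links.get(u, {}).items():
            if q <= 0:
                continue
            nd = d + 1.0 / q
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the hop sequence from the predecessor map
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def transport_iterations(size, quality, bandwidth=1000.0):
    """Simulator iterations needed to move `size` bytes over one link,
    as a function of link quality and entity size (data transport model)."""
    return math.ceil(size / (bandwidth * quality))
```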
Once the models were constructed and the standalone WAMAS library was complete, it was integrated into AGLOBE as a replacement for the existing boolean communication system. Future research in communication, distributed computing, and distributed artificial intelligence will now have a better approximation of realistic wireless communications, improving the quality of results obtained using AGLOBE for agent research (or any agent simulator integrating the WAMAS library).
By codyaray
– April 6, 2011
As a research assistant at the Applied Communications and Information Networking Center, I saw a lot of applications that involved streaming media over multicast in mobile ad-hoc networks. The developers of each application were attempting to independently secure their traffic, which is inefficient in terms of development effort and can also open more security holes. In response, my job was to prototype a transparent network communications security service for multicast applications using pre-distributed keys. This system “intercepts” incoming and outgoing traffic to specific addresses while the packets are still being processed in kernel-space; if a packet is incoming and destined for the current host, it is decrypted before being propagated up to the application layer, and if a packet is outgoing, it is encrypted before being sent to the lower layers. In this manner, applications don’t have to deal with (as many) security issues, and encryption can be done identically across a group of applications or the entire system. The system uses netfilter queue for packet filtering and mangling, and the cryptographic facilities of OpenSSL’s libcrypto. The communication channels were selected by binding the address/port of the receiver (e.g., multicast addresses) to particular queues using iptables.
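The real service does this interception in kernel-space via netfilter queue, with encryption from libcrypto. The Python sketch below only illustrates the direction-based decision logic; the XOR "cipher" is a deliberately toy placeholder (not secure, and not what the C implementation used), and the channel bindings are hypothetical.

```python
import hashlib

# Hypothetical multicast address/port bindings (stood up via iptables
# in the real system)
SECURED_CHANNELS = {("239.1.1.1", 5000)}

def _keystream(key, length):
    """Toy keystream from repeated hashing; a stand-in for libcrypto's
    real ciphers, NOT a secure construction."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def _xor(data, key):
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def process_packet(payload, dst, direction, key):
    """Mimics the kernel-space hook: outgoing packets to a secured channel
    are encrypted before hitting the wire; incoming ones are decrypted
    before being handed up to the application layer."""
    if dst not in SECURED_CHANNELS:
        return payload                 # unbound channel: pass through untouched
    if direction == "out":
        return _xor(payload, key)      # encrypt on the way down the stack
    return _xor(payload, key)          # decrypt on the way up (XOR is symmetric)
```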
This software was written in C. As the solo developer on this project, I took it as a learning opportunity for exploring open source best practices. The build environment was based on Automake and Autoconf. User interface messages were internationalized using Gettext, man pages were generated using help2man, info pages were generated from .texi files, and source code documentation was generated by Doxygen from the embedded comments. Gnulib was used to share common files. The software used GNU-style switches for easy command-line use, a signal handler to ensure a smooth exit, and could also be run as a daemon. The system was packaged for Red Hat and Debian. Best practices for documentation were also adopted (such as including a KEYS file with my PGP key).
By codyaray
– April 6, 2011
This project localized a mobile robot using RF information, which is important in indoor environments and other settings where GPS is not available but RF signals can be sensed. We were provided a data set consisting of RSSI information acquired from four fixed routers as a robot moves in an open line-of-sight (LoS) environment. The robot has two network cards, hence two different sensors for reading the RSSI information. The spreadsheets break this information out by timestamp and include an estimate of the robot’s location based on its odometer readings. In several cases there is an analytic computation of the robot’s actual location that compensates for drift in the odometer readings (computed by observing the robot’s location on the floor). In some experiments the robot moves in a straight line; in others it does not. In some experiments the robot moves with constant velocity; in others it stops and pauses in spots. These motion patterns are evident in the data.
My approach was to fit a path loss model to the experimental Received Signal Strength Indicator (RSSI) data, obtain a maximum-likelihood (ML) position estimate by atomic multilateration with the four fixed WiFi routers, and use Kalman filtering to fuse the odometry measurements and ML RSSI estimates. Gnuplot was used to fit the experimental data to the path loss model using the least squares method and obtain the set of coefficients for each interface/router pair. A maximum-likelihood estimate was derived as the minimum mean squared estimate of a system of (N-1) equations, where there were N=4 routers in this scenario. MATLAB was used to perform these matrix computations. The objective of the Kalman filtering was to improve the robot’s odometry-based location estimate (smooth, but drifts as errors accumulate) using the RSSI information (noisy, but close to the actual position) acquired from the four fixed WiFi routers. To use discrete-time Kalman filtering, I derived a model of the robot’s dynamics, approximating a two-dimensional continuous Wiener process acceleration model. We compared the results of sensor fusion at two levels: measurement fusion (three RSSI measurements and odometry fused) and MLE fusion (the RSSI-based MLE estimate fused with odometry).
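As an illustration of the fusion step, here's a one-dimensional Python sketch: odometry displacements drive the Kalman predict phase, while RSSI-based position estimates drive the update phase. The actual project used a two-dimensional Wiener-process acceleration model; the scalar state and the noise variances here are simplifying assumptions.

```python
def kalman_fuse(odometry_steps, rssi_positions, q=0.01, r=4.0):
    """1-D sketch of the fusion: odometry (smooth but drifting) drives
    the predict phase; RSSI position estimates (noisy but unbiased)
    drive the update phase. q and r are illustrative noise variances."""
    x, p = 0.0, 1.0                  # state estimate and its variance
    track = []
    for dx, z in zip(odometry_steps, rssi_positions):
        # Predict: apply the odometry displacement, inflate uncertainty
        x, p = x + dx, p + q
        # Update: blend in the RSSI measurement, weighted by the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        track.append(x)
    return track
```

With perfect odometry the filter tracks exactly; with biased odometry the RSSI updates pull the estimate back toward the true position.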
Cody completed this project independently as part of an advanced artificial intelligence course (receiving an A for the project) using a mixture of MATLAB, SQL (MySQL), shell scripting, awk, and gnuplot.
By codyaray
– April 6, 2011
This laboratory introduced a practical application in which sinusoidal signals are used to transmit information: a touch-tone dialer. Bandpass FIR filters were used to extract the information encoded in the waveforms. The goal of this project was to design and implement bandpass FIR filters in MATLAB and to perform the decoding automatically. Specifically, we developed a MATLAB program to encode and decode the dual-tone multi-frequency (DTMF) signals used to dial a telephone. Code is available on GitHub.
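The lab used bandpass FIR filters in MATLAB; the Python sketch below substitutes a direct correlation (DFT-style) energy measure per DTMF frequency, a simpler stand-in for the filter bank. The sample rate and tone duration are illustrative.

```python
import math

LOW = [697, 770, 852, 941]       # row frequencies (Hz)
HIGH = [1209, 1336, 1477, 1633]  # column frequencies (Hz)
KEYS = ["123A", "456B", "789C", "*0#D"]
FS = 8000                        # sample rate (Hz)

def encode(key, dur=0.1):
    """A DTMF tone is the sum of the key's row and column sinusoids."""
    row = next(i for i, ks in enumerate(KEYS) if key in ks)
    col = KEYS[row].index(key)
    f1, f2 = LOW[row], HIGH[col]
    n = int(FS * dur)
    return [math.sin(2 * math.pi * f1 * t / FS) +
            math.sin(2 * math.pi * f2 * t / FS) for t in range(n)]

def tone_energy(x, f):
    """Signal energy at frequency f via direct correlation (a stand-in
    for the bandpass FIR filter bank used in the original lab)."""
    c = sum(v * math.cos(2 * math.pi * f * t / FS) for t, v in enumerate(x))
    s = sum(v * math.sin(2 * math.pi * f * t / FS) for t, v in enumerate(x))
    return c * c + s * s

def decode(x):
    """Pick the row and column frequencies with the most energy."""
    row = max(range(4), key=lambda i: tone_energy(x, LOW[i]))
    col = max(range(4), key=lambda i: tone_energy(x, HIGH[i]))
    return KEYS[row][col]
```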
By codyaray
– September 16, 2010
A study of audio steganography with an emphasis on psychoacoustic approaches. The project required hiding a text-based message inside an audio signal with minimal or no distortion of the signal as perceived by the human ear. Three approaches were employed: the least-significant-bit method, time-domain amplitude modulation, and the psychoacoustic model from MPEG-1. The results are audio files, and are thus difficult to quantify without a human-participant study, for which class time didn’t allow. Cody developed this project independently in MATLAB as part of a Digital Signal Processing / Psychoacoustics course. The theory and experimental results of each approach are discussed. The code is available on GitHub.
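Of the three approaches, the least-significant-bit method is the simplest to sketch. The Python version below (an illustration, not the MATLAB code from the project) hides one message bit in the LSB of each 16-bit sample, changing each sample by at most one quantization step; the extractor is assumed to know the message length.

```python
def embed_lsb(samples, message):
    """Hide message bytes in the least-significant bits of 16-bit PCM
    samples, one bit per sample (LSB-first within each byte)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(samples), "message too long for cover signal"
    out = list(samples)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # overwrite only the lowest bit
    return out

def extract_lsb(samples, n_bytes):
    """Recover n_bytes of hidden message from the sample LSBs."""
    data = bytearray()
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (samples[8 * k + i] & 1) << i
        data.append(byte)
    return bytes(data)
```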
By codyaray
– September 16, 2010
A text-independent speaker verification system based upon classification of Mel-Frequency Cepstral Coefficients (MFCC) using a minimum-distance classifier and a Gaussian Mixture Model (GMM) Log-Likelihood Ratio (LLR) classifier. The speaker recognition system was implemented in MATLAB using training data and test data stored in WAV files. I developed custom matching and testing routines based upon minimum distance classification, extracted feature vectors using the melcepst function from the Voicebox toolkit, and used an open source GMM library. For testing, I used 8 speakers (4 male, 4 female) from the popular TIMIT speaker database, each saying two phonetically-diverse sentences. One sentence was used for training and the other for testing. Manually training the threshold for the minimum-distance classifier resulted in 91% classification accuracy. Cody developed this project independently as part of a Digital Signal Processing course. A presentation and report follow. The code is available on GitHub.
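A minimum-distance classifier of this kind can be sketched in a few lines of Python. The feature vectors below stand in for MFCC frames (the real system extracted them with Voicebox's melcepst); the threshold and function names are illustrative assumptions.

```python
import math

def centroid(frames):
    """Mean feature vector over a speaker's training frames (MFCCs in
    the actual system; any fixed-length vectors here)."""
    n = len(frames)
    return [sum(f[d] for f in frames) / n for d in range(len(frames[0]))]

def train(speakers):
    """speakers: {name: [frame, ...]} -> {name: centroid}"""
    return {name: centroid(frames) for name, frames in speakers.items()}

def verify(model, claimed, frames, threshold):
    """Accept the claimed identity iff the mean Euclidean distance from
    the test frames to the claimed speaker's centroid is under a
    manually trained threshold."""
    c = model[claimed]
    d = sum(math.dist(f, c) for f in frames) / len(frames)
    return d <= threshold
```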
By codyaray
– September 15, 2010
A simple program that creates word search puzzles (assignment, mirror) using uninformed and backtracking search algorithms. The program was implemented as an AI production system, developed by Cody for CS 510 Artificial Intelligence, and the code is available on GitHub.
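A backtracking placement routine for such a puzzle might look like the Python sketch below (an illustration of the technique, not the original production-system code). Empty cells are left as '.'; a real generator would fill them with random letters.

```python
def make_wordsearch(words, size):
    """Place words into a size x size grid by backtracking over start
    positions and directions; returns the grid, or None if impossible."""
    grid = [["."] * size for _ in range(size)]
    dirs = [(0, 1), (1, 0), (1, 1), (-1, 1)]   # right, down, two diagonals

    def fits(word, r, c, dr, dc):
        for i, ch in enumerate(word):
            rr, cc = r + dr * i, c + dc * i
            if not (0 <= rr < size and 0 <= cc < size):
                return False
            if grid[rr][cc] not in (".", ch):   # allow crossings on matches
                return False
        return True

    def place(k):
        if k == len(words):
            return True
        word = words[k]
        for r in range(size):
            for c in range(size):
                for dr, dc in dirs:
                    if fits(word, r, c, dr, dc):
                        saved = [grid[r + dr * i][c + dc * i]
                                 for i in range(len(word))]
                        for i, ch in enumerate(word):
                            grid[r + dr * i][c + dc * i] = ch
                        if place(k + 1):
                            return True
                        for i, ch in enumerate(saved):   # undo: backtrack
                            grid[r + dr * i][c + dc * i] = ch
        return False

    return grid if place(0) else None
```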
By codyaray
– September 15, 2010
An image printing program using halftoning. A brief write-up is below and the code is available on GitHub.
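As a taste of the technique, here's a Python sketch of ordered dithering with a 2×2 Bayer matrix, a simple relative of the dot-pattern halftoning used for image printing (the original assignment's exact method may differ).

```python
# 2x2 Bayer index matrix; each entry picks one of four threshold levels
BAYER_2x2 = [[0, 2],
             [3, 1]]

def halftone(gray, levels=4):
    """Ordered dithering: compare each pixel (0-255) against a spatially
    varying threshold, yielding a binary image whose local dot density
    approximates the input intensity."""
    out = []
    for r, row in enumerate(gray):
        out_row = []
        for c, v in enumerate(row):
            threshold = (BAYER_2x2[r % 2][c % 2] + 0.5) * 256 / levels
            out_row.append(1 if v > threshold else 0)
        out.append(out_row)
    return out
```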
By codyaray
– September 15, 2010
A basic object recognition system was developed in MATLAB using Fourier descriptors and minimum-distance classification against stored object templates. The essential idea was to represent the boundary coordinates as complex numbers for computing the Fourier descriptors of the image, then reduce the descriptors using a low-pass filter, which removes the high-frequency content (i.e., the detail) and keeps the basic shape. The basic shape could then be compared to stored templates and classified using, in this case, a minimum distance classifier. Cody developed the system independently as part of an image processing class. Presentation slides are below.
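The pipeline can be sketched in Python: boundary points become complex numbers, a DFT yields the descriptors, and truncating to the low-frequency coefficients keeps only the basic shape. Dropping the DC term and normalizing by the first coefficient make the descriptors translation- and scale-invariant. The details (descriptor count, normalization) are illustrative, not the project's exact choices.

```python
import cmath

def fourier_descriptors(boundary, k=8):
    """Treat (x, y) boundary points as complex numbers, take a DFT, and
    keep the magnitudes of the first k non-DC coefficients, normalized
    by the first. Dropping DC removes translation; normalizing removes
    scale; magnitudes remove rotation and starting point."""
    z = [complex(x, y) for x, y in boundary]
    n = len(z)
    coef = [sum(z[t] * cmath.exp(-2j * cmath.pi * u * t / n)
                for t in range(n))
            for u in range(k + 1)]
    base = abs(coef[1]) or 1.0
    return [abs(c) / base for c in coef[1:k + 1]]

def classify(boundary, templates):
    """Minimum-distance classification against stored template descriptors."""
    d = fourier_descriptors(boundary)

    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(d, templates[name]))

    return min(templates, key=dist)
```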
By codyaray
– September 15, 2010