Monday, March 13, 2017

[dwvcvceg] Colors

We explore the range of RGB colors (the sRGB gamut) projected into CIE L*a*b* (CIElab, Lab) color space, as implemented in the Haskell colour package.

The first image below is a slice of the CIE L*a*b* color solid through the plane L = 50, halfway between 0 (black) and 100 (white). The "a" and "b" coordinates have range ±128. The shape is slightly different from the image of the slice at L=50 on Wikipedia -- the bottom is slightly narrower -- so I wonder what is going on. It might be the choice of white point. We chose white_point=Data.Colour.CIE.Illuminant.d65 for everything in this discussion.

(Most of the images on this page have been scaled down with HTML. Right click and View Image to embiggen.)

cielab lightness 50

The inside of the cross section is kind of boring, being mostly grayish, so in the image below we fill the cross section with line segments connecting the origin (a=0, b=0) to the edge, each segment taking the color of its edge point. We do this because we are mostly interested in the extent (outer edge) of the sRGB gamut in CIE L*a*b*. And it makes the pictures more striking.

cielab lightness 50

Finding the edge of the cross section, i.e., the last representable color still within the sRGB gamut, was an exercise in root finding.
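For the record, a minimal sketch of that root finding, assuming cieLAB from Data.Colour.CIE: bisect along a ray from the gray axis for the largest radius that still round-trips into sRGB. The upper bound of 200 and the tolerance are arbitrary, and the bisection assumes each cross section is star-shaped around the gray axis.

import Data.Colour.CIE (cieLAB)
import Data.Colour.CIE.Illuminant (d65)
import Data.Colour.SRGB (RGB(..), toSRGB)

-- a Lab point is inside the sRGB gamut when all three channels land in [0,1]
inGamut :: Double -> Double -> Double -> Bool
inGamut l a b = all ok [channelRed c, channelGreen c, channelBlue c]
  where c = toSRGB (cieLAB d65 l a b)
        ok x = 0 <= x && x <= 1

-- largest in-gamut radius along direction theta at lightness l, by bisection
gamutRadius :: Double -> Double -> Double
gamutRadius l theta = go 0 200
  where go lo hi | hi - lo < 1e-6 = lo
                 | inGamut l (mid * cos theta) (mid * sin theta) = go mid hi
                 | otherwise = go lo mid
          where mid = (lo + hi) / 2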

Below are 13 other slices colored similarly. The lightness values are as follows (they can be recomputed with the sketch after the list).

  1. Lightness 16.149085003423885, halfway between black and the lightness of fully saturated blue.
  2. Lightness 32.29817000684777, the lightness of fully saturated blue.
  3. Lightness 42.767524268618665, halfway between blue and red.
  4. Lightness 53.23687853038956, the lightness of fully saturated red. We switch to a black background as colors approach white in order to better be able to see the edge of the shape.
  5. Lightness 56.779166162118884, halfway between red and magenta.
  6. Lightness 60.32145379384821, the lightness of fully saturated magenta.
  7. Lightness 74.02883222725823, halfway between magenta and green.
  8. Lightness 87.73621066066826, the lightness of fully saturated green.
  9. Lightness 89.42553101260378, halfway between green and cyan.
  10. Lightness 91.11485136453929, the lightness of fully saturated cyan.
  11. Lightness 94.12695165179927, halfway between cyan and yellow.
  12. Lightness 97.13905193905926, the lightness of fully saturated yellow.
  13. Lightness 98.56952596952962, halfway between yellow and white.
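A sketch of that recomputation with the colour package, assuming cieLABView from Data.Colour.CIE:

import Data.Colour (Colour)
import Data.Colour.CIE (cieLABView)
import Data.Colour.CIE.Illuminant (d65)
import Data.Colour.SRGB (sRGB)

-- the L* component of a colour, e.g. lightness (sRGB 0 0 1) for saturated blue
lightness :: Colour Double -> Double
lightness c = l where (l, _a, _b) = cieLABView d65 c

-- lightnesses of the fully saturated colors: blue, red, magenta, green, cyan, yellow
saturatedLs :: [Double]
saturatedLs = map lightness
  [sRGB 0 0 1, sRGB 1 0 0, sRGB 1 0 1, sRGB 0 1 0, sRGB 0 1 1, sRGB 1 1 0]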

It might be fun to 3D print or render the solid with 3D graphics someday. It seems to have a complicated shape. For 3D graphics, it would be most natural for the rendered color of the solid at each surface point to be the actual color of the solid, incorporating no reflected light or shadows. However, such a lighting model will probably prevent the solid's sharp edges from being easily visible.

Above, we presented only 13 slices of the CIE L*a*b* color space. The first image below depicts the outer edge colors of 1024 slices. The vertical axis is lightness (L). The horizontal axis is the angle from the a-b origin. On my monitor, there are curious ridges corresponding to the saturated colors. I suspect it has to do with gamma.

However, the appeal of the CIE L*a*b* color space is perceptual uniformity; that is, perceptual differences in color can be calculated as Euclidean distance. The second image above has each row individually rescaled for perceptual uniformity. In other words, the horizontal axis is the proportion of the perimeter of the cross section.

Marching along the perimeter of the cross section was another exercise in root finding. At each step, we seek the next point on the perimeter a constant distance away (and remember that finding any point on the perimeter itself requires root finding). Because we don't know the perimeter of a cross section in advance, we arbitrarily choose a small step size, obtaining a set of points separated by that step size (except from the final point back to the initial point), then crudely rescale those points into 1024 steps.
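A sketch of one marching step, taking the boundary as a function from angle to (a,b) point (e.g., built from gamutRadius above). The bisection assumes the target point lies within a quarter turn and that the distance to it grows monotonically over that bracket.

-- next angle whose boundary point is a fixed Euclidean step away
nextTheta :: (Double -> (Double, Double)) -> Double -> Double -> Double
nextTheta boundary step theta = bisect theta (theta + pi / 2)
  where
    (x0, y0) = boundary theta
    overshoot t = let (x, y) = boundary t
                  in sqrt ((x - x0) ^ 2 + (y - y0) ^ 2) - step
    bisect lo hi
      | hi - lo < 1e-9    = lo
      | overshoot mid < 0 = bisect mid hi
      | otherwise         = bisect lo mid
      where mid = (lo + hi) / 2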

The image above on the right was the "magnum opus" of this project, taking days to compute. Here is a raw data file of the (a,b) coordinates of the perimeter at 1023 levels of lightness. Some combination of Implicit Differentiation and Automatic Differentiation might have computed this more efficiently.

We can take any row of this image to extract a band of uniform lightness and uniform rate of color change. Below on the top row are two copies of the band at L = 53.3203125, the lightness with the longest perimeter. This happens to be very close to the lightness of pure red. On the bottom row is the same band shifted 25 pixels. The color distance between the rows is roughly constant, so ideally there should be equally sharp contrast along the entire boundary. (But on my monitor this appears not to be the case; we will explore this further below.)

We can sample this band at multiples of phi (the golden ratio) to get an infinite palette of colors widely spaced from each other, all at the same lightness.
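A sketch of the sampling, yielding positions as fractions of the band length:

-- fractional parts of successive multiples of phi never cluster,
-- so consecutive palette entries stay widely spaced
goldenPositions :: Int -> [Double]
goldenPositions n = [ frac (fromIntegral k * phi) | k <- [0 .. n - 1] ]
  where phi = (1 + sqrt 5) / 2
        frac x = x - fromIntegral (floor x :: Integer)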

Palette entries 0, 5, 8, 13 are similar because ratios of consecutive Fibonacci numbers approximate the golden ratio.

For a fixed size palette, one can probably do slightly better by particle repulsion on the cross section itself, though I have not implemented this.

Next, abandon the constraint of equal lightness and instead focus on the saturated RGB colors. The next image has only the saturated RGB colors, projected orthogonally onto the a-b plane, as its outline. The edge colors are then radially connected to the origin as before. Someday, it might be fun to render this in 3D as a minimal surface.

saturated RGB colors projected to the a-b plane of the CIE Lab color space

I discovered that the appearance of the above image on my LCD display changes radically depending on the position of my head: the width of the colors changes. (I suspect CRTs do not have this problem.) The image below may better illustrate the effect. Move your head up and down (or left and right) and notice how (or if) the color sectors change in width. I especially notice it in the blues. Also, here is a webpage with the same circles tiling the background.

saturated RGB colors projected to the ab plane of the CIE Lab color space

The first image below shows the saturated colors scaled in distance for perceptual uniformity. The second image is without correction, a typical color palette moving in RGB space at constant speed, using all the saturated colors of the rainbow.


The upper image below gives the same perceptually uniform rainbow except extended (looped) a bit to better see the region around red. The lower image is the same, except shifted by 20 pixels. The color distance between the rows is roughly constant, so ideally there should be a boundary line of constant contrast across the whole width. On my monitor, this appears not to be the case: the rows blend in the red-magenta area. As before, on LCD displays, the contrast may depend on the viewing angle.

rgb(55,0,255)rgb(147,0,255)

The above two colors are separated by a distance of 18.9955 according to this online color distance calculator, whose results are in close (but not exact) agreement with my code. On my monitor, the colors appear quite different.

rgb(255,0,69)rgb(255,0,33)

The above two colors are separated by a distance of 18.89. On my monitor, they appear similar.

rgb(0,255,44)rgb(0,255,100)

The above two colors are separated by a distance of 19.108. On my monitor, they appear similar.
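The distance in question is plain Euclidean distance in L*a*b* coordinates (the CIE76 difference). A sketch of the calculation, assuming cieLABView from Data.Colour.CIE; the online calculator may differ slightly in white point or rounding:

import Data.Word (Word8)
import Data.Colour.CIE (cieLABView)
import Data.Colour.CIE.Illuminant (d65)
import Data.Colour.SRGB (sRGB24)

-- CIE76 color difference between two sRGB triples
deltaE :: (Word8, Word8, Word8) -> (Word8, Word8, Word8) -> Double
deltaE p q = sqrt ((l1 - l2) ^ 2 + (a1 - a2) ^ 2 + (b1 - b2) ^ 2)
  where (l1, a1, b1) = lab p
        (l2, a2, b2) = lab q
        lab (r, g, b) = cieLABView d65 (sRGB24 r g b)

For example, deltaE (55,0,255) (147,0,255) should land near the 18.9955 quoted above.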

Based on the above examples, I'm less than convinced that L*a*b* space is good for defining perceptual color distance. Or, my monitor is bad at displaying colors.

Here is the program used to generate the images, and an alternate download location for the inline images.

Saturday, March 11, 2017

[jhevkhhn] 3x4 byte

Consider a 3x4 block of pixels with 4 of the pixels (probably corners) set to a constant pattern (probably 3 on and 1 off) to fix orientation.  The other 8 can encode 8 bits or 1 byte.
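A sketch of one such layout; the choice of corners and bit order here is arbitrary (the "off" corner is at the bottom right):

import Data.Bits (testBit)
import Data.Word (Word8)

-- corners hold the fixed orientation pattern; the other 8 cells hold the bits
encode3x4 :: Word8 -> [[Bool]]
encode3x4 byte = [ [ cell (r, c) | c <- [0 .. 3] ] | r <- [0 .. 2] ]
  where
    corners   = [((0, 0), True), ((0, 3), True), ((2, 0), True), ((2, 3), False)]
    dataCells = [ rc | rc <- [ (r, c) | r <- [0 .. 2], c <- [0 .. 3] ]
                     , rc `notElem` map fst corners ]
    cell rc = case lookup rc corners of
                Just v  -> v
                Nothing -> testBit byte (length (takeWhile (/= rc) dataCells))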

[okuacmxo] New names for old planets

Now that we know more about the planets in our solar system, pick some new names for them.

[duafaprv] Argon2i versus Argon2d

Use Argon2i when hashing passwords.  It provides resistance against side channel attacks with which eavesdroppers could discover the password while it is being hashed.

Use Argon2d when hashing non-secrets, most famously for proof-of-work systems like cryptocurrencies (e.g., Bitcoin) and Hashcash.  It provides resistance against ASIC and GPU attacks.

The tricky case is when there are threats of both side-channel and ASIC attacks (arguably any attack against a hashed password).  The conservative approach is simply Argon2i with more memory and more rounds.  The more risky approach is the less-cryptanalyzed Argon2id.

[wwoznhxd] Compressing Blue Marble Next Generation

The 12 months of the Blue Marble Next Generation image set offer a nice large fixed benchmark target for image compression.  There is correlation between the months which can be exploited.

Previously.

Wednesday, March 08, 2017

[paocbhaj] Electron positron

Place an electron and positron near each other, and they will attract, meet, and release a photon or photons as they annihilate each other.

If they are point particles, how do they manage to meet?  How do they shed their excess angular momentum in order to meet?  What happens to the energy released in descending the infinitely deep potential well?  (There's infinite energy there, so it's surprising an electron-positron accelerator is even needed to provide the energy to create new particles.)

The answers are probably "uncertainty principle".

Monday, March 06, 2017

[cyacdcjd] Irregular resampling

Given an irregularly arranged collection of pixels (point samples), interpolate the value at a point between them.  This has probably already been solved.

Easiest is nearest-neighbor.  There are adventures to be had with clever data structures and algorithms to find the nearest neighbor.  The region nearest each pixel is a Voronoi cell.  On a sphere manifold, this problem comes up in converting between HEALPix and the Tegmark icosahedron.

Next easiest is to compute the Delaunay triangulation (interestingly, the dual of Voronoi) of the pixels, then use a plane constructed on each triangle to interpolate values.
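A sketch of the per-triangle step: barycentric weights are ratios of signed areas, all nonnegative and summing to 1 inside the triangle, and the interpolated value is the weighted combination of the vertex values.

-- barycentric weights of point (x, y) with respect to a triangle
barycentric :: (Double, Double) -> (Double, Double) -> (Double, Double)
            -> (Double, Double) -> (Double, Double, Double)
barycentric (x1, y1) (x2, y2) (x3, y3) (x, y) = (w1, w2, w3)
  where det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        w1  = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
        w2  = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
        w3  = 1 - w1 - w2

-- value on the plane through the three vertex values
interpolate :: (Double, Double, Double) -> (Double, Double, Double) -> Double
interpolate (w1, w2, w3) (v1, v2, v3) = w1 * v1 + w2 * v2 + w3 * v3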

What is the next order interpolation?  Probably something closer to Voronoi again.

For a square grid of pixels, the famous ones are bilinear and bicubic interpolation.

[zkehinlc] Why can't we all just get along?

Although Rodney King did not strictly ask "why", it is a very deep, very important question.  Previously: (1), and (2).

Can the answer be crowdsourced?  Lots of people contributing portions of the answer, perhaps anecdotes of people getting along and not.

We imagine a support vector machine then deriving the margin.

[atddpaxz] Politically drawn districts

Drawing electoral districts so that they are compact (e.g., algorithmic redistricting such as bdistricting.com), avoiding gerrymandering, tends to concentrate poor people into a small number of districts because the poor tend to live in high density neighborhoods.  (Previously mentioned.)  The highest density neighborhoods around the world tend to be slums or housing projects.

In contrast, if a political party whose political base is the poor is in power, it will draw districts so that the poor have just barely a majority in many of them (but winner takes all per district), consequently dividing and diluting the political influence of their opponents, the rich.

Of course, if a political party whose political base is the rich is in power, things will be reversed.

Which political situation is more likely?

On one hand, the poor constitute the majority of the population, so it seems likely their political party would usually have power.  Power would thus usually rest with the majority, so democracy is functioning as designed: tyranny of the minority by the majority will happen, a known hard problem with democracy.  Should this hard problem be addressed by regulating the districting algorithm or process?

On the other hand, the rich have tremendous power to influence the political process by financing campaigns and paying for propaganda, so it is also plausible that the political party of the rich would usually have power.  However, if this is true, it seems unwise to try to fix the corruptive influence of money on politics by changing the political districting algorithm or process.  Better would be campaign finance regulations and a political discourse that helps good speech win over bad speech (deceptive propaganda).

Inspired by Amartya Sen, perhaps the purpose of democracy is not to yield great outcomes but to simply avoid extremely bad outcomes, e.g., famine.  It seems that democracy withstands corruption in times of crisis: we can imagine a scenario in which the rich would prefer to let the poor die of famine rather than do income redistribution to aid them, and so the rich apply their financial might to keep or gain political power to achieve this outcome, but their efforts are in vain, because democracy seems to prevent this.  What kind of districting allows democracy to continue to function this way?

Compact districts placing the poor into a few districts with high density of poor will decrease the political strength of the poor, who will most likely be the first and hardest hit by crises like famine.

Sunday, March 05, 2017

[rifctyln] Table of images

<table cellspacing="0" cellpadding="0"> <tr> <td> <img style="display:block" ... /> </td> ... </tr> ... </table>

Note that making img a block element means it'll add a newline like a paragraph, so no more than one image (horizontally) per TD.

Previously demonstrated here.

https://www.emailonacid.com/blog/article/email-development/12_fixes_for_the_image_spacing_in_html_emails

Interestingly, before CSS, HTML tables were the way to do precise layout; nowadays due to newer default DOCTYPE's, CSS is ironically required to do it this old-school way.

cellspacing and cellpadding go away in HTML5, so CSS "border-spacing" needs to be applied to the table and "padding" to each(!) TD.

Wednesday, March 01, 2017

[grvnthvg] Bloom filter to serialize a community

Put a collection of strings into a Bloom filter.  It is interesting that the resulting data structure is independent of the ordering of the input strings and does not internally use an ordering of the strings, where "use an ordering" is left somewhat vague.  What other data structures have these properties?
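A toy sketch that makes the order independence concrete, assuming the hashable package for salted hashes: setting bits is commutative and associative, so any insertion order yields the same filter.

import Data.Bits (setBit, testBit)
import Data.Hashable (hashWithSalt)
import Data.Word (Word64)

-- the 3 bit positions (out of 64) associated with a string
bitsOf :: String -> [Int]
bitsOf s = [ hashWithSalt salt s `mod` 64 | salt <- [1, 2, 3] ]

insert :: String -> Word64 -> Word64
insert s filt = foldl setBit filt (bitsOf s)

-- may report false positives, never false negatives
member :: String -> Word64 -> Bool
member s filt = all (testBit filt) (bitsOf s)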

Original motivation was to create a string representing a collection of names such that no name is made more important (e.g., earlier in order) than another one.  Kind of a digital equivalent of a group photo.

[earbxlnq] Up and down motion sickness

Does motion sickness occur when moving only up and down?  (Perhaps a virtual-reality illusion of vertical motion.)  Does the inner ears' sense of balance get invoked when all that is changing is the vertical g force?

[qgmccsry] Short foreign words for long native words

Identify words or phrases in a language which are significantly shorter than the equivalent word or phrase in another language.  Import the shorter into the language with the longer.

Sunday, February 26, 2017

[ryqubypz] Removing letters from the alphabet

Pairing some letters by phonetic similarity (usually voiced versus unvoiced consonants, inspired by Japanese hiragana) allows removing some letters from a small keyboard, adding one more key to signify using the alternate letter.

bp cj dt fv gk hw lr mn qx sz a e i o u y

Previously.

[grputrtr] Passenger only aircraft

Consider flying every flight as two planes: one for the people and another for the luggage.  This decreases the harm of bombs being placed in luggage.

Modify a plane to carry more people, no luggage.  Probably double decker.

[utbdzflf] Serving people who hate their jobs

An employee hates his or her job.  This makes life unpleasant for those who work with, or for, that employee.  That unpleasantness causes them to hate their jobs too.  This then feeds back to the first employee, who hates his or her job even more because of the unhappy people he or she has to work with.

How much does this happen?  Can the cycle be broken?

[vizxbugj] Garbage collect before OOM killing

Before the operating system starts killing processes when it is out of memory, consider signalling processes (somehow) to run their garbage collection now and release their freed memory back to the OS, assuming they are written in a garbage collected language.  Need some standardized method of signalling.

Instead of every process having its own private garbage collector, provide it as an operating system service.  This seems tricky, as each program may encode references differently.

Also consider signalling processes to do garbage collection when performance has gotten bad due to frequent swapping to virtual memory.  This might not be too effective because stuff sitting around waiting to be garbage collected probably sits quietly swapped out.  Doing garbage collection on them might even make things worse.

[fzozfmue] House burglary and gun ownership

Are house burglars more prevalent and more bold in areas with stricter gun control?  Need to control for the many other factors which could affect house burglars.

Inspired by the lock snapping vulnerability discovered and being frequently exploited in Britain, whereas many American home door locks are even less secure.

[hhqmupvg] Light and dark punctuation

Punctuation can roughly be divided into two categories.

Light punctuation can replace space as a token separator: hyphen/dash.  Underbar in computer code.  Colon in time.  Period in DNS.

Heavy punctuation usually has a space on at least one side.

[jicgchtx] Cavalier and variations

The cavalier is a fairy chess piece that improves on the gryphon by correcting the gryphon's asymmetric retreat: the gryphon cannot exit the way it arrived.

(Is the cavalier actually an improvement?  Or does this peculiar feature of the gryphon make the game more interesting?)

The cavalier's move can be either a ferz move followed by an outward rook move, or a rook move followed by an outward ferz move.  The two possible paths prevent the asymmetric blocking possible with the gryphon.

The cavalier's range is slightly different from the gryphon because in its rook move, it must step at least one square.  However, a cavalier ferz compound does have the same range as a gryphon.
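A sketch of the cavalier's destination squares on an empty board, ignoring blocking, which makes the gryphon-minus-ferz range explicit:

-- a ferz step (sx, sy) extended outward by a rook slide of at least one
-- square, along either the rank or the file
cavalierMoves :: Int -> (Int, Int) -> [(Int, Int)]
cavalierMoves n (f, r) =
  [ (f', r') | sx <- [-1, 1], sy <- [-1, 1], k <- [2 .. n - 1]
             , (f', r') <- [(f + sx, r + sy * k), (f + sx * k, r + sy)]
             , onBoard f', onBoard r' ]
  where onBoard x = 0 <= x && x < n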

In terms of range, cavalier = gryphon - ferz.  We can also consider similar subtractions applied to sliding pieces: a rook minus wazir, bishop minus ferz, queen minus either of them, or queen minus king.  The intervening (subtracted) square must be empty.  If we want jumping, we can add that back in with alfil or dabbaba.  Jumping and sliding on the same move remains impossible though.  If it were possible, we'd have asymmetric retreat problems again.

These "minus sliders" are nice components to make compound pieces out of.  The abilities are disjoint.

A piece that makes one ferz then one wazir move, in either order, both outward, has the same range as a knight but requires at least one of the intervening squares to be empty.

[izyttmmm] Point particle pool

Instead of billiard balls being spheres undergoing perfectly elastic collisions, create a new game, probably virtual, in which the collisions are of point particles obeying an inverse square law of repulsion or attraction.

Incidentally, when distances are relatively large and all particles repel, the system behaves like billiard balls.

How should motion stop, providing a static state for the next player's move?  The particles never stop exerting force on each other.  Easiest is for time to stop (perhaps at a time point controlled by the player), at which point all velocities are set to zero.  On the next move, the player gets to set the velocity of the cue particle (ball) and start time moving forward again.
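A crude sketch of the dynamics under these rules: Euler integration with unit masses and unit force constant (a real implementation would want a better integrator and handling of near-coincident particles).

type Particle = ((Double, Double), (Double, Double))  -- (position, velocity)

-- one time step; sign = +1 for repulsion, -1 for attraction
step :: Double -> Double -> [Particle] -> [Particle]
step sign dt ps = map move ps
  where
    move ((x, y), (vx, vy)) = ((x + vx' * dt, y + vy' * dt), (vx', vy'))
      where
        (ax, ay) = sumV [ accel (x, y) q | (q, _) <- ps, q /= (x, y) ]
        vx' = vx + ax * dt
        vy' = vy + ay * dt
    sumV = foldl (\(a, b) (c, d) -> (a + c, b + d)) (0, 0)
    accel (px, py) (qx, qy) = (sign * dx / r3, sign * dy / r3)
      where dx = px - qx
            dy = py - qy
            r3 = (dx * dx + dy * dy) ** 1.5

-- "stop time": zero all velocities, leaving a static state for the next move
freeze :: [Particle] -> [Particle]
freeze = map (\(p, _) -> (p, (0, 0)))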

Much more ambitious: simulate electron and positron collisions as in a particle accelerator: new particles can be created.  The inverse square law then only applies at large distances.

Friday, February 24, 2017

[ralkzndf] Safe space and the War on Drugs

We clarify the parallels between "safe space", "consent culture", and the War on Drugs, responding to this comment on this old post:

Assuming the rules for "safe space" and "consent culture" are accompanied by punishments for breaking those rules, my suspicion is that those who will be on the receiving end of such punishments will again be the same minorities and marginalized classes disproportionately on the receiving end of punishments in the War on Drugs.

Two parts of the connection:

These marginalized groups are the ones that the privileged portion of society wants to see punished to the max.  They are "Them" (in Us versus Them); they are the Other.  Society (the privileged portion) does not ask for leniency when They are up for punishment like it asks for (and usually gets) when One Of Us is up for punishment.

Within these marginalized groups is a disproportionately large subset that society likes to deny opportunity, whether opportunities for legitimate jobs outside the illegal drug industry, or opportunities for social, romantic, or sexual interaction: "I don't want to be around this person."  Integral to a safe space violation or consent violation is someone saying "no", and many people choose to say "no" based on racial and social class prejudices.

[htnrbrdd] Expressive other hand

Hold a smart phone in one hand.  The other hand and arm are free to do all kinds of expressive motions: so much more than sliding a finger around a touch screen display.  The problem is how a sensor can detect those actions.

Inspired by musical conducting.

[hrzgkqno] Music bridging the culture gap

We have a very divided society where the sides are failing to communicate with each other.  Can music -- or more broadly, art -- bridge this communication gap, explaining one side's {position, ideas, culture} in a way the other side can relate?

Perhaps yes: this is the kind of thing which could be helped along by philanthropy, commissioning art with this goal of building bridges.  It falls to the artist to do the perhaps unenviable grunt work of learning "the other side", then the perhaps unenviable grunt work of figuring out how to communicate it to "our side".

Perhaps not: people consume the entertainment they like, that they agree with.  People hear in music what they want to hear, which might be different from what the musician is trying to say.

It seems highly likely this has already been tried, or done.  How has such music or art been received?

[stqcijwh] Quipu

Create a system of recording information using knots on a string, or multiple strings.  If desirable, reuse any ideas from Incan quipus, though most of that knowledge is lost.

Knots are neat because information is being recorded in topology, which is different from most other methods of recording information.  Consider how difficult it is to turn a knot into not-a-knot, or turning a left-handed knot into a right-handed one.  Compare that to the ease of altering information in other media: it typically requires only a local change.

Modern technology can create extremely durable fibers for long term preservation of information: which fibers should be used?  Knotting puts stress on a string.

Should the nature of the knot matter, e.g., size, type, or should it just be binary: whether or not there is a knot at a given position?  A special type of string could be amenable to X-ray CAT scans which could decode the internal structure of a knot.  Specially colored fibers could allow 3D computer vision techniques to derive the type of knot just by examining its outer surface.

Modern technology can probably make knots at very precise locations on a string.

Intriguing is having the knots on loops of string as knot theory does it instead of straight segments.  We need some way of fusing the ends of a string (without a knot) -- many such ways exist -- or directly creating loops of string with knots on them (e.g., 3D printing).

Knots have an attractive feature that they can be read by touch, e.g. in low light environments or with failing eyesight.

[elbejrwh] Clouds as local key

Take a picture of some clouds outside and robustly derive a cryptographic key from them.  Broadcast some information encrypted with this key.

Anyone in the vicinity can take their own picture of the clouds, derive the same key, and decrypt.  Time passes, the clouds change shape, and the information becomes impossible to decrypt: it was ephemeral.  Attackers far away also cannot see the clouds.

How can one robustly derive the same key from pictures of clouds taken at slightly different points in time from slightly different vantage points?  The sky is big: how can the sender and receivers agree on which cloud?  Can weather satellites constantly recording every cloud defeat this?

What would this technology be useful for?

Are there other things like clouds which can serve as the basis for this kind of system?  We could artificially generate such a signal (previously).

Thursday, February 23, 2017

[lkzruaxs] Only high compression matters

One tends to use lossless data compression (text compression) on data that can be compressed a lot, for example log files or trace files.  For small amounts of compression, the slightly decreased bandwidth or increased storage is not worth the inconvenience.

What is the threshold? Maybe around 5:1, or compression to 20% of the original size.  Include many input data which compress that much or more in a compression benchmark.

Incidentally, most lossy compression of media hits that ratio or better.

Another category of useful compression might be compression that is fast enough to be transparent.  This will depend on the bandwidth of other parts of a pipeline that transfers and uses data.

Monday, February 20, 2017

[uyyrhizz] IDE type annotations

Ideas for a desirable feature of a Haskell IDE:

Good: IDE pops up the type signature of the library function or symbol under the point.  Emacs haskell-mode can do this.

Better: IDE is aware of static scoping, let binding, and imports to really know what function you are referring to.  However, if you forgot to import, it still tries to be helpful, guessing at a library function and offering its signature as well as a reminder that you need to import it.

Better: If the function does not have an explicit type signature, the IDE does type inference to figure it out.

Better: if the type is polymorphic, the IDE also provides the type of the function as instantiated where it is used, instead of just the polymorphic type where it was declared.

Wednesday, February 15, 2017

[jjlddkmc] Pick your own house lock

Instead of leaving a concealed key outside the house for when one accidentally locks oneself out, leave lockpicks concealed outside the house.  This is less disastrous if someone bad discovers it.

Leaving the lockpicks at your house avoids needing to carry them on your person, possibly avoiding legal issues in jurisdictions where possession is considered proof of intent.

Of course, learn to pick your own house lock first.  This is aided by the American tradition of having terribly insecure door locks in many places.

[lqdaenrs] Fun games of no strategy

Create a game which has no strategy (though it might present the illusion of requiring strategy) but which is still enjoyable to play.  One way: each playing results in novel occurrences, perhaps because simple rules can interact in interesting, beautiful, and complicated ways.  A randomizer can ensure new regions of the state space get explored each time.

[dptxehvj] Face CAPTCHA

Humans can presumably read faces, gauging emotion, better than computers can.  This could be the basis for a CAPTCHA similar in style to reCAPTCHA: a bunch of pictures of faces for which people form a consensus on what emotion each is conveying.

[uxnvpywt] Free speech as a scapegoat

Enumerate historical examples in which free speech was blamed for a problem, speech was curtailed, and the problem persisted, thereby proving that speech was incorrectly blamed.

[jzvhfndr] Tor for C2

Brian Krebs reports that the command and control server for a botnet was hosted by an ISP in Ukraine, and that complaining up the ISP tree eventually knocked it offline.  However, such a server seems to be the perfect use case for a Tor hidden service.  Why was it not done?

Monday, February 13, 2017

[crhnyxty] Competent smuggler

In an alternate universe, Luke and Ben hire a different smuggler to transport themselves to Alderaan.  Perhaps someone more competent, less flying by the seat of his pants, who demonstrates a spectacular array of clever tricks to "avoid Imperial entanglements".  (Most of the time, Han, Chewbacca, and the Millennium Falcon rely only on speed and gunfire.)

[rnsmkgjj] Matter disappearing

Astronomy has a few nice examples of large quantities of matter turning into energy.

Of course, stellar nuclear fusion.

A supernova converts a large amount of the progenitor star into neutrinos.  While technically not massless, because neutrinos interact so little with anything, it seems like mass just disappeared.

In a black hole merger, a significant amount of mass gets radiated away as gravitational waves.

[lkajhfjz] Great glass elevator

It seems relatively easy to simulate in virtual reality the visual experience of traveling straight up and down.  There are many ways to do it, from projecting a fully real environment (previously, tower) to traveling in a completely synthesized world.

Consider altering the stereo depth, the distance between the eyes, at different heights.

Why do you want to be an astronaut?

[megjnlno] Copyright causes history not to be recorded

Artists have incentive not to record what influenced or inspired their artistic works because of fear of copyright lawsuits from those they cite.  But such documentation would be useful for historians.

This documentation might be significant beyond just curiosities of history: it might show how society is knitted together, which would be powerfully useful.

Inspired by Marvin Gaye Estate versus Blurred Lines.

[omgbdhjk] Panning and rotating rectangle

Tile a rectangular image, then place a rectangular viewport the same size as the image on the plane.  For pure translations, no matter the offset, exactly the entire image will be visible, though cut up.  (Is this fact interesting or obvious?  Under rotation it is no longer exact: some pixels can appear twice and others not at all, though the areas balance.)  Slide and turn the viewport around to offer different views of the same image.  Easiest is an irrational slope and a constant irrational rate of rotation, so the viewport will never repeat.
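A sketch of the sampling map such a viewer needs: rotate the viewport pixel, add the offset, and wrap modulo the image dimensions.

-- where viewport pixel (u, v) lands in a w-by-h tiled image
viewportToImage :: Double -> (Double, Double) -> (Double, Double)
                -> (Double, Double) -> (Double, Double)
viewportToImage angle (ox, oy) (w, h) (u, v) =
  (wrap w (ox + u * cos angle - v * sin angle),
   wrap h (oy + u * sin angle + v * cos angle))
  where wrap m x = x - m * fromIntegral (floor (x / m) :: Integer)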

[fprrzelf] Copyright education at classical music concerts

Distribute educational material at performances regarding what copyright duration was when the performed work was created.

"XYZ was incentivized to compose the work you will hear tonight by copyright protection -- exclusive right to royalties -- of 0 years.  Compare this to copyright term for works composed nowadays: 95 years.  Is music better nowadays?"

Wednesday, February 08, 2017

[cmrfqpcl] Before and outside the Big Bang

The universe is and has always been infinite in size.  It has also existed forever in time.  We'll explain these assumptions later (tl;dr: Occam's Razor).  We are challenging the conventional notion that the universe had a start point in time, the Big Bang, and at that start point it was infinitesimally small.

A long time ago, the infinite universe was very hot and dense, so hot that the 4 known fundamental forces (gravity, strong, weak, EM) were merged into one.

Space expanded, so the universe got cooler.  Note that space remained infinite in size as it expanded, kind of like multiplying infinity by 2.  It's twice as large, but still infinity.  Things close together got further apart throughout the infinite universe as it expanded.

At some time point, Tgravity, it got cool enough for gravity to separate out from that one merged fundamental force.  Tgravity is a negative number for reasons we'll explain later.

Space expanded more, the universe got cooler, and at time point Tstrong, the strong nuclear force separated from the electroweak force.  Actually, we'll define Tstrong to be 0 for reasons we'll explain later.  So, immediately after this time 0, the forces were gravity, strong, and electroweak.

Space expanded (a lot) more, the universe got cooler, and the weak nuclear force separated from the electromagnetic force at time T_weak.  We actually know the value of T_weak to be about 10^-12 seconds based on particle accelerator experiments which can recreate the temperature of the universe (slightly) before T_weak.

We wrote above that space expanded "a lot" because, during some interval between 0 and T_weak, inflation happened.  It happened much closer to the 0 end.  More about that later.

The unconventional selection of Tstrong=0 is motivated by philosophical and practical considerations of what we can and cannot know.  There is a huge energy gap (10^12) between the electroweak and Grand Unified Theory scales: GUT explains the universe between Tgravity and 0, i.e., "negative time".  We will "never" be able to do experiments at the GUT scale: certainly not on Earth.  They are too unimaginably difficult: a trillion times more energy than the LHC.  Therefore we will never be able to know (that is, experimentally confirm) what the universe was like at or before time 0.  Incidentally, this means we will never know what time Tgravity was.

Similarly, we will never, ever, be able to experimentally confirm a Theory Of Everything (TOE) a.k.a. quantum gravity a.k.a. string theory, a theory about what the universe was like before the negative time point Tgravity, called the Planck scale.  In fact, time points before Tgravity might be ill-defined, because gravity separating out from the other forces means that only then did spacetime come to exist, so only then did time itself and consequently things like causality come to exist.  Before that point, timey-wimey wibbly wobbly.

(Maybe our descendants or alien civilizations will prove me wrong about what we can scientifically know, then we will regret this choice of zero (like Fahrenheit).  Scientific American's The Amateur Scientist did whimsically propose building an Ultimate Collider to test TOEs.)

Actually time might have also behaved funny during inflation: inflation did very strange things to space, so we speculate it also did strange things to spacetime and consequently time.  We may never understand inflation: it is so close to the GUT scale that experiments probing it seem almost as unimaginable as experiments testing a GUT.  It might have been better to define the end of inflation as the zero time point.  Only then did time start flowing the way we experience it now.  The quoted value of T_weak above is the time interval between the end of inflation and the end of the electroweak epoch.

Nevertheless, even though we will never know what the universe was like before time 0, we will assume that it always existed all the way out to negative infinity.  (We may need some yet undefined notion of what it means to exist before time itself began to exist at Tgravity.)  We assume infinite existence because it is the simplest model: Occam's Razor.  If we don't assume it, then we have to explain more complicated things: what existed before the universe burst into being?  Why did the universe burst into being?

Similarly, we assume that space is infinite. Currently, there is a finite patch of the universe we can see, because light has had time to reach our eyes.  Astronomers call this the Observable Universe, which is kind of a confusing name.  Better would have been Our Finite Patch Of The Universe.  Lots of confusion stems from conflating "universe" (assumed infinite) and "observable universe" (definitely not infinite).  Even though we cannot see beyond Our Finite Patch Of The Universe, we assume that the universe extends infinitely beyond it.  This is again the simplest model.  Otherwise we have to explain complicated things like, what does the edge of the universe look like?  What exists beyond the edge?

Similarly, we also assume that space has always been infinite.  At no point in time was the entire universe compressed into a point.  Things were denser and closer back then, but the extent of space was always infinite.  This is the simplest model: otherwise, we have to explain complicated things like, what existed outside of the finite (in fact zero-volume) point?  How did the universe transition instantaneously from 0 to infinite in size?

When cosmologists say, at such and such point in time, the universe was the size of a grain of sand, it is actually confusing shorthand for, the chunk of space that eventually expanded to Our Finite Patch Of The Universe was, back then, the size of a grain of sand.  It is just shorthand for the expansion factor between then and now.  That sand-grain-sized chunk of space back then was still part of an infinite universe.

The conventional narrative that the universe started from an infinitesimal point at the Big Bang is derived from running the equations of General Relativity backward in time.  If you do that, it does predict a singularity, and conventionally, that singular point is defined as the zero point in time, as opposed to a later point in time Tstrong defined as zero above.  However, running GR backwards all the way to the singularity is a little bit silly, because other things happen on the way to the singularity, namely some GUT, some TOE, which might interfere with the prediction of the singularity.  Or, in this essay, we assume they definitely will interfere and prevent the singularity because otherwise it leaves us with the complicated questions mentioned above that we are avoiding by Occam's Razor.

Throw a ball, and we can plot a parabola, then extrapolate where the ball will land.  This is analogous to extrapolating that the universe began as a singularity.  However, we are actually throwing a ball toward a wall of fog.  This fog corresponds to the GUT scale, time 0, that we will never have knowledge beyond.  We have no idea what lies in the fog; we have no idea whether the ball will land at the extrapolation of the parabola into the fog.  This essay assumes that something analogous to a bottomless pit exists inside the fog: the ball never lands; the universe has no beginning.

I suppose the more accurate analogy is we see a ball having exited from a foggy area traveling a parabolic path.  From where and how was the ball launched?

In critique of this essay: because we will never be able to test a GUT or TOE, the only way to choose among them seems to be by Occam's Razor again.  Are there simple such theories which permit the GR singularity or something analogous?  There probably are.

[fsnyoxuk] Data horcruxes

1. Start with some data.
2. Cryptographically sign the data.
3. Expand it with an error correcting code.
4. Break it up into pieces, perhaps individual bytes.
5. Choose a nonce, which will be common to all the pieces.
6. Add the nonce to each piece.
7. Add a sequence number to each piece.
8. Cryptographically sign each piece.
9. Expand each piece with another error correcting code.
10. Scatter the pieces.

The data will be difficult to destroy: the data can be reconstructed from a partial collection of partially damaged pieces and verified as authentic.

Kind of a solution in search of a problem.  Previously.  The signature and nonce make the minimum size of a piece annoyingly large.

The total number of pieces needs to be encoded somewhere.  The public key to verify the signatures needs to be stored elsewhere.  Other problems?

[ejwpwgww] Earth as panspermia origin

Large meteorites have hit the earth, creating a splash, launching chunks of earth into space.  Many of those chunks probably carried living microorganisms, which hitched a ride to whatever planet that launched chunk of earth eventually smashed into.

Inspired by Martian meteorites that landed on Earth.

In this way, Earth has probably seeded a large, perhaps cone-shaped, region of the galaxy (or beyond) with life over the past 3 billion years.  Maybe like a contagious, constantly traveling, constantly sneezing patient zero.  How large is that region?

If there is a habitable planet within that region, what is the probability it escaped infection by Earth?  On one hand, space is big.  On the other hand, there has been a lot of time.  There could also be indirect infections where a planet seeded by Earth thrived, got hit by another meteorite, then chunks of it and its life get launched into space (propagated outbreak in epidemiology).

Of course, one wonders whether Earth itself got seeded from elsewhere.

[bixwyjax] ZPAQ busy beaver

The ZPAQ compressed file format specifies a virtual machine.  This suggests a contest of designing small files which will uncompress (decompress) into large but finite output.

Also, uncompress into interesting output.

[vlvnvito] Monoculture trees

Many famous examples of a disease killing a lot of plants are in agriculture (potato famine), where monoculture likely played a large role in the disease being able to successfully infect many plants.

Exceptions seem to include chestnut blight and Dutch elm disease.  Were those trees somehow monocultural also?  What other monoculture trees are there where a disease might quickly kill most of them someday?

[ajwillzv] Pen is mightier than the sword

Substitute the string (without spaces) mightierthanthesword for the word penis in contexts where the latter might be censored.

Ironically, the original aphorism justifies censorship.

[bqbrezti] Is the particle physics desert empty?

The particle physics desert theory states that all fundamental particles have mass less than 1 TeV or greater than 10^13 TeV.  There is a desert between 1 and 10^13.

Can this theory be tested without having to test all the way up to 10^13 TeV?  If one has to do that, the theory is useless.  In particular, can the theory be proven correct (a weird thing to do to a theory) by only testing up to 1 TeV?

What is the nature of inflation, neutrino mass, dark matter, and dark energy?  Assuming the desert theory is proven true, will the answers to these questions definitely be found before 1 TeV?  Or, if the answers aren't found, will we definitely know we will not be able know the answers until we can build (at least) GUT-scale particle accelerators?

Of course, astronomers can observe relics from when the universe was at GUT-scale energy to try to answer these questions.

[ariplvdv] GUT and TOE beauty contest

Grand Unified Theories and Theories Of Everything are difficult to test and disprove.  They cannot be tested on current or even reasonably imaginable future particle accelerators.  We can check that a theory agrees with observations in the low-energy regime, and whether they match astronomical observations of faint relics from the universe's high-energy era.

Given these difficulties, selecting a good such theory becomes more of a question of aesthetics than science.  Which such theories are beautiful and which are ugly?  What defines beauty of a theory?  Occam's Razor is likely important.  Given a precise definition of beauty, find, perhaps computationally, the optimally beautiful theory which agrees with weak and faint observations.

The E8 Theory Of Everything likely attracted attention due to its aesthetics.

Tuesday, February 07, 2017

[euzzsqsw] Inflation inside a black hole

Inside a black hole, density and consequently temperature become very high.  At some point of collapse, well inside the event horizon, do the density and temperature resemble the early universe?  Hypothesize that inflatons then get produced, causing inflation, a repulsive force which counterbalances gravity and prevents the core from collapsing to a singularity.  This is analogous to the various other processes that counterbalance gravity in normal stars, white dwarfs, and neutron stars.

Previously (1) (2) (3)

Inflation seems like an extremely powerful phenomenon based on what it did to the universe.  How much mass would a black hole need to have to overpower inflation?

Philosophically, do we even care what is going on inside an event horizon?

I suppose it matters for black holes with no event horizon, naked singularities, which, if the above hypothesis is correct, won't be singularities, just extremely dense chunks of matter surrounded by very weird spacetime.

[xxmnulmq] Chestnut brain

The insides of chestnuts look strikingly like brains.  What evolutionary fitness function were each trying to optimize such that they both converged on the same shape?  Why do chestnuts look the way they do compared to other nuts?  (Possibly centuries of cultivation.)

[cqluwcao] The lighter particle was harder to make

It is curious that the Higgs boson, mass 125 GeV, was finally first seen at the LHC which has 8 TeV (maybe 7) collision energy, whereas the top quark with greater mass 172 GeV was seen earlier at the Tevatron with its only 1.96 TeV collision energy.

Electron-positron colliders tend to be more efficient in terms of converting collision energy into particle mass, but as best as I can tell, neither particle was seen at LEP, which had collision energy 209 GeV.  This might be explained by the fact that energy converted into mass must be produced as matter-antimatter pairs (more generically, some collection that will decay and annihilate back to no mass), so even a perfectly efficient LEP would max out at particle mass 104.5 GeV.

Monday, February 06, 2017

[kxwixtrh] Density of Carmichael numbers

Estimate the number of Carmichael numbers less than x as

c(x)=x*exp(-k*log(x)*log(log(log(x)))/log(log(x)))

Formula (by Erdos, 1956) is from the Wikipedia page.  There is an unknown constant k, which will be discussed later.

The probability that a number is a Carmichael number is p=c(x)/x.  For the generation of a PGP RSA key, we need 4 primes: 2 primes for the master key, 2 primes for the subkey.  The probability that any of the 4 are Carmichael numbers is approximately 4*p (truncating the binomial expansion).  Primes are half the size of the modulus, so the probability of generating a bad key (at least one Carmichael number) is

pr(n)=4*c(2^(n/2))/(2^(n/2))
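A sketch evaluating this in log space, which is necessary because 2^(n/2) overflows a Double: with x = 2^(n/2) and L = log x, the estimate reduces to log(pr) = log 4 - k*L*log(log L)/log L.

-- natural log of pr(n) for modulus size n bits and Erdos constant k
logPrBadKey :: Double -> Int -> Double
logPrBadKey k n = log 4 - k * lx * log (log lx) / log lx
  where lx = fromIntegral n / 2 * log 2  -- log (2^(n/2))

-- base-10 exponent, e.g. log10PrBadKey 1.8 512 is about -43.4, i.e., 3.56e-44
log10PrBadKey :: Double -> Int -> Double
log10PrBadKey k n = logPrBadKey k n / log 10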

We explore various values of that unknown constant k.  It turns out the result is quite sensitive to k.  Empirical calculations on the Wikipedia page up to 10^21 (approximately 2^69) suggest k is around 1.86.

If we estimate k=1.8, then we get the following results.  The probability of failure on (absurdly small) 512-bit keys: 3.56e-44; 2048-bit keys: 3.63e-159; 4096-bit: 3.54e-303.

If we far more conservatively estimate k=1.0, then 512-bit: 1.35e-24; 2048-bit: 1.76e-88; 4096-bit: 1.73e-168.

The probabilities are probably acceptably small to not worry about accidentally encountering a Carmichael number, though you never know about that constant k.  So Fermat tests are fine, no need for the additional complexity of Miller-Rabin.

Friday, February 03, 2017

[zihjrysc] Alpha and Omega

It starts out as a children's story, but ends apocalyptic.  Inspired by Harry Potter.  But no one author is skilled at all the styles, so it may be better written collaboratively.

Or, it ends as survival horror.

[vnqzswbf] Ender Purim

Ender spares the life of just one enemy Formic instead of genociding them all.

Saul spares the life of just one enemy Amalekite (Agag) instead of genociding them all as God Commanded.  Haman, a descendant of that survivor, goes on to wage genocidal warfare against the Jews.

Tell a continuation of the Ender saga in which the Formics, descendants of the egg Ender spared, return many generations later and wage genocidal warfare against the humans.  Perhaps with some science fiction time travel (Star Trek: First Contact), that second war is the one Mazer Rackham wins.  Celebrate yearly the anniversary of Rackham's miraculous victory.

Thursday, February 02, 2017

[plwpicim] Direct versus indirect encryption

Direct: Ciphertext encrypted with a key.  Key derived from a password.

Indirect: Ciphertext encrypted with a key X.  Key X is encrypted with a key Y.  Key Y is derived from a password.

There are probably more official names for these.

Indirect encryption allows changing the password without having to reencrypt everything, so seems attractive for things like disk encryption or filesystem encryption.  However, it seems more vulnerable to attack.  If the attackers can get a hold of encrypted X, then they can do a password guessing attack against it, which if successful could be useful even after the user changes their password (changing Y).  What other attacks?  What defenses are there?

Wednesday, February 01, 2017

[dsfjlwim] Transmitting compressed files

A certain absurdity is possible when transferring a file over scp between two filesystems that support transparent compression, e.g., btrfs: The file is stored compressed with zlib (other options are possible; this is the most absurd).  It is uncompressed by a kernel module to make it visible to the userspace program.  scp (with compression enabled) compresses the file with zlib for transmission over the wire.  The receiving scp uncompresses the file, then writes it to the filesystem.  The kernel recompresses the file with zlib to write to the compressed filesystem.

[fmsethjs] High precision UT1

The Earth's rotation is not stable, causing the need for leap seconds.  How precisely is its instability known from moment to moment?

The way to measure it is probably to look at stars with a telescope, noting the time and telescope orientation, though the details of how to do it seem tricky.  Maybe quasars and radio telescopes.

Inversely, how do astronomers keep their telescopes stably pointed at the same location in the sky for long exposure photographs if the earth to which their telescopes are anchored rotates unstably beneath them?

Sit down, then stand up.  Your movement altered the Earth's rotational moment of inertia and consequently its rate of rotation, which in principle could be detected, then maybe the measurement deconvolved to recover your movement.  Anything that involves movement of mass, for example air molecules vibrating in speech, affects the Earth's rotation.  We can imagine a sophisticated surveillance system that monitors actions on Earth by seemingly absurdly observing distant quasars to extremely high precision.

Less ridiculously, the unstable rotation of the Earth could be a source of entropy for a random number generator.  There used to be a random number generator powered by a Lava Lamp.  The Earth is the ultimate Lava Lamp: variations in rotation are caused by movement of magma within the mantle or core.  How many bits per second of entropy can the Earth's rotation produce?

[lrrrishw] Union

The word union could be pronounced onion as if it had the prefix un-.  Onion, then, could be pronounced differently, too.

Tuesday, January 31, 2017

[jniqqxho] Decoupled digital camera

Create a camera whose optics (especially telephoto lens) are not rigidly connected to its display, which might be a smartphone.  Wired or wireless connection.

Inspiration was wanting binoculars to see something far away, but only rarely wanting them.

[isxqeiqb] Formal software engineering

Are formal tools for software development -- proving correctness of code -- needed more for the "make it work right" or the "make it work fast" phase of software development?

[rwkfvqlg] Pythagorean rectangle

The square of the diagonal of a rectangle is equal to the sum of the squares of its two side lengths.

Or, the square of the hypotenuse of a right triangle is equal to the sum of the squares of its two legs.

The former seems a less imposing (less jargon filled) way of introducing the Pythagorean theorem.

[cldreftj] Bottom class not assimilating

In some societies, we observe a seeming paradox: a bottom social class refusing to assimilate, refusing to give up the social markers that identify them as part of the bottom class, markers by which they then get discriminated.  Possible explanations:

The paradox is an illusion.  We only notice the few unusual outliers of those refusing to assimilate.

Assimilation takes generations due to the time constant of psychological identity.

Maintaining the social markers buys into a social safety net within the community of that class, which is more valuable than the risky benefits of assimilation.  They are making a rational decision.

The opportunities for assimilation are actually illusions.  No amount of dropping social markers will disconnect you from your social class, because the upper classes want to keep the lower classes well populated.

[vpswchba] Human rights abuses are inevitable with globalization

Hypothesize that with globalization, human rights abuses or more generally governments or societies treating (at least some of) their citizens terribly, or allowing them to be treated terribly, are inevitable.  This is a consequence of the combination of one economic, one political, and one psychological effect.

The economic effect is that a society/government which exploits its population for labor at low wages will "win" in a globalized economy.  The world market sets the price of goods.  If one country refuses to exploit -- perhaps instead setting a minimum wage or providing social services to its population -- then another country will exploit by not doing those things.

The political effect is that laws end at country borders.  One country might decide exploitative labor conditions are bad and outlaw them, but they can't force another country to do the same. (Or maybe they can, with military meddling and cross-border propaganda, most famously the Communist movement.)  Even those being exploited may wish to maintain the status quo.

The psychological effect is that people can survive, thrive, and even feel satisfied and happy in very bad conditions.  (We still need to understand when, why, and how.  Previously somewhat related.)

If this hypothesis is true, then it is dismal science, reminiscent of Malthus.  It will always be a race to the bottom.

Perhaps then blame globalization itself: free (or freer) trade agreements and low-cost international shipping (perhaps results of technology and world peace on the high seas -- few pirates).

[gbupycok] Classical to quantum game

Create a game which starts out obeying the laws of classical physics, but as the levels get harder, more and more quantum mechanics effects occur.  Perhaps the story is that the scale gets smaller or the temperatures get colder.

Quantum tunneling to get through classically impenetrable barriers.

Ends with the player being able to intuitively think quantumly.

Monday, January 30, 2017

[tdoyjpic] Testing the Collatz conjecture on 10-million-bit numbers

tl;dr: No counterexamples to the Collatz conjecture were found.

Assume the Collatz conjecture is false; that is, there exist starting numbers which do not end in the 4-2-1 cycle.

Further major assumption: these counterexamples become much more numerous for larger integers; the reason the Collatz conjecture has seemed true in the range that it has been verified so far is because of the Strong Law of Small Numbers.

Then, we might be able to find counterexamples simply by testing some random large starting numbers.  One problem: when testing a number, how can one tell if it is a counterexample, not headed to a 4-2-1 cycle?  How can one tell if one has iterated enough?

Simple practical solution: start by testing small random numbers and gradually increase the size.  The smaller numbers will all reach 4-2-1, and the number of iterations and computation time can be recorded for each of them.  As the numbers get larger, the number of iterations and computation time will increase sort of smoothly.  If we encounter a number which is taking much longer than the extrapolation of the trend seen thus far, then we know something weird is going on with it.
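The bare iteration is tiny in Haskell (the actual program linked below adds optimizations and logging); counting steps until reaching 1:

-- diverges if x is a genuine counterexample
collatzSteps :: Integer -> Int
collatzSteps = length . takeWhile (/= 1) . iterate next
  where next x | even x    = x `div` 2
               | otherwise = 3 * x + 1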

We tested random starting numbers as large as 10313058 bits, the last one taking 74494956 iterations over 12 hours of computing time (though it was not very optimized code).  Every number tested converged to 4-2-1.

Source code in Haskell and logs.

We wish we had SIMD accelerated (e.g., GPU) small*large arbitrary precision multiplication (previously mentioned) to compute 3*x for large x.  x/2 could also be accelerated with SIMD.  x+1 will only astronomically rarely overflow the least significant limb.

Previous similar attempt, which was much slower because then large integers were represented as lists of Bool.

Wednesday, January 25, 2017

[khvlvecm] Simplified syllables

10 initial consonants: (null) m p b t d s z k g

5 vowels: i e a o u

2 ending consonants: (null) n

A nice round 100 syllables (10 * 5 * 2), though that was not intentional.
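
A quick sketch enumerating all of them:

syllables :: [String];
syllables = [ c ++ [v] ++ e
  | c <- ["", "m", "p", "b", "t", "d", "s", "z", "k", "g"]
  , v <- "ieaou"
  , e <- ["", "n"] ];
-- length syllables == 100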

A little bit awkward for English spelling because g changes its sound before i and e.  t d s z change their sounds in Japanese depending on the vowel.

Tuesday, January 24, 2017

[zghmwfxn] Carlsen - Karjakin final mating combination

We analyze the variations after 49.Rc8+ all the way to checkmate.  Moves that are the only move preserving the win are marked with a single exclamation point.  If multiple winning moves are tied with the same shortest distance to checkmate, then all are given (which is unconventional for analysis).  We also give all black defenses, even if they get checkmated quicker.  The two variations with 54.Qd5+? deliver checkmate 1 move more slowly, so they are marked with a question mark.

[Event "Carlsen - Karjakin World Championship"]
[Date "2016.11.30"]
[Round "13.4"]
[White "Magnus Carlsen"]
[Black "Sergey Karjakin"]
[Result "1-0"]
[ECO "B54"]
[EventDate "2016.11.30"]

1.e4 c5 2.Nf3 d6 3.d4 cxd4 4.Nxd4 Nf6 5.f3 e5 6.Nb3 Be7 7.c4 a5 8.Be3 a4 9.Nc1 O-O 10.Nc3 Qa5 11.Qd2 Na6 12.Be2 Nc5 13.O-O Bd7 14.Rb1 Rfc8 15.b4 axb3 16.axb3 Qd8 17.Nd3 Ne6 18.Nb4 Bc6 19.Rfd1 h5 20.Bf1 h4 21.Qf2 Nd7 22.g3 Ra3 23.Bh3 Rca8 24.Nc2 R3a6 25.Nb4 Ra5 26.Nc2 b6 27.Rd2 Qc7 28.Rbd1 Bf8 29.gxh4 Nf4 30.Bxf4 exf4 31.Bxd7 Qxd7 32.Nb4 Ra3 33.Nxc6 Qxc6 34.Nb5 Rxb3 35.Nd4 Qxc4 36.Nxb3 Qxb3 37.Qe2 Be7 38.Kg2 Qe6 39.h5 Ra3 40.Rd3 Ra2 41.R3d2 Ra3 42.Rd3 Ra7 43.Rd5 Rc7 44.Qd2 Qf6 45.Rf5 Qh4 46.Rc1 Ra7 47.Qxf4 Ra2+ 48.Kh1 Qf2? {allows the mating attack} 49.Rc8+!! {not the only move that wins, but the fastest and of course prettiest} Kh7 ( 49...Bd8 50.Rxd8+! Kh7 51.Qh6+! Kxh6 ( 51...gxh6 52.Rxf7#! ) 52.Rh8#! ) ( 49...Bf8 50.Rxf8+! Kxf8 ( 50...Kh7 51.Qh6+! Kxh6 ( 51...gxh6 52.R5xf7# ) 52.Rh8#! ) 51.Rxf7+! Ke8 ( 51...Kg8 52.Rf8+! Kh7 53.Qf5+! Kh6 ( 53...g6 54.Qxg6# ) 54.Rh8# ( 54.Qg6# ) ) 52.Rf8+! Kd7 ( 52...Ke7 53.Qf7# ) 53.Qf5+ ( 53.Qf7+ Kc6 54.Rc8+ ( 54.Qd5+? Kc7 ( 54...Kd7 55.Qb7+ ( 55.Rf7+ Kc8 ( 55...Kd8 56.Qxd6+ Ke8 ( 56...Kc8 57.Qc7# ( 57.Qf8# ) ) ) ( 55...Ke8 56.Qe6+! Kd8 57.Qd7# ) 56.Qb7+ ( 56.Qc6+ Kb8 ( 56...Kd8 57.Qd7# ) 57.Qb7# ( 57.Qe8# ) ) ( 56.Qe6+ Kb8 ( 56...Kd8 57.Qd7# ) 57.Qe8# ) 56...Kd8 57.Rf8# ( 57.Qd7# ) ( 57.Qb8# ) ) ( 55.Qb5+ Kc7 ( 55...Ke7 ( 55...Ke6 56.Qe8# ) 56.Qe8# ) 56.Rf7+ Kb8 ( 56...Kc8 ( 56...Kd8 57.Qd7# ) 57.Qe8# ) 57.Qe8# ) 55...Ke6 56.Re8+ ( 56.Qf7+ Ke5 57.Qd5# ) 56...Kf6 57.Qe7# ) 55.Rf7+ Kd8 ( 55...Kb8 56.Qb7# ) ( 55...Kc8 56.Qb7+ Kd8 57.Rf8# ( 57.Qd7# ) ( 57.Qb8# ) ) 56.Qxd6+ Ke8 ( 56...Kc8 57.Qc7# ( 57.Qf8# ) ) 57.Rf8# ( 57.Qd7# ) ( 57.Qe7# ) ) 54...Kb5 55.Qc4+ Ka5 56.Ra8# {Longest defense} ) 53...Kc6 ( 53...Ke7 54.Qf7# ) ( 53...Kc7 54.Qc8# ) 54.Rc8+ ( 54.Qd5+? Kc7 ( 54...Kd7 55.Rf7+ ( 55.Qb7+ Ke6 56.Re8+ ( 56.Qf7+ Ke5 57.Qd5# ) 56...Kf6 57.Qe7# ) ( 55.Qb5+ Kc7 ( 55...Ke6 56.Qe8# ) ( 55...Ke7 56.Qe8# ) 56.Rf7+ Kd8 ( 56...Kb8 57.Qe8# ) ( 56...Kc8 57.Qe8# ) 57.Qd7# ) 55...Kc8 ( 55...Kd8 56.Qxd6+ Kc8 ( 56...Ke8 57.Rf8# ( 57.Qd7# ) ( 57.Qe7# ) ( 57.Qf8# ) ) 57.Qc7# ( 57.Qf8# ) ) ( 55...Ke8 56.Qe6+! Kd8 57.Qd7# ) 56.Qc6+ ( 56.Qe6+ Kd8 ( 56...Kb8 57.Qe8# ) 57.Qd7# ) ( 56.Qb7+ Kd8 57.Rf8# ( 57.Qd7# ) ( 57.Qb8# ) ) 56...Kb8 ( 56...Kd8 57.Qd7# ) 57.Qb7# ( 57.Qe8# ) ) 55.Rf7+ Kc8 ( 55...Kd8 56.Qxd6+ Ke8 ( 56...Kc8 57.Qc7# ( 57.Qf8# ) ) 57.Qe7# ( 57.Rf8# ) ( 57.Qd7# ) ( 57.Qf8# ) ) ( 55...Kb8 56.Qb7# ) 56.Qc6+ Kd8 57.Qd7# ) 54...Kb7 55.Qd7+! Ka6 56.Ra8#! {Longest defense} ) 50.Qh6+! ( 50.Qh6+ Kxh6 ( 50...gxh6 51.Rxf7#! ) 51.Rh8#! ) 1-0

It surprised me that Carlsen worked out that 49...Bf8 loses to a forced mate, even though the mate is quite long.  In the Longest defense variations, the black king gets chased all the way to the a-file.  However, the variations with 54.Qd5+? feature that as a more natural move, boxing in the black king, and many of them end with a more natural back rank mate.

Also a bit surprising that Karjakin did not play 49...Bf8.

Create a tool to produce analyses like this, with a UI to navigate variations including doing something efficient with transpositions.

[xhkgvkxz] Ticking of several imperfect clocks

Place multiple clocks ticking seconds near each other.  If they are not in sync, the ticks will repeat a rhythm.  If the clocks are imperfect, the rhythm will slowly change over time.

[tokcyiph] Civil service hierarchy

Presidents are selected from state governors.  Governors are selected from mayors and city managers of large cities.  Those in turn are selected from the mayors and managers of small cities.

Similarly several levels of legislatures and legislators.

Similarly several levels of judges.

All this is already done mostly informally.  New wrinkle: each level periodically shuffles its members, e.g., governors move to governing a different state.  This demonstrates how they perform under broadly varied conditions, useful information when selecting for the next higher level up the hierarchy.

Flaws in the system:

Only the president ever deals with the military, monetary policy, and international diplomacy.

Local people might not be led or represented by one of their own if things keep getting shuffled.  Maybe apply the shuffling only to the executive and judicial branches.

Is corruption worse under this system?

By requiring all candidates for a higher post to have experience at a lower post, the system allows comparing ability: who did better governing (say) a poor state?  Judging by ability is fine, but then where and how does politics -- differences in opinion -- enter?  (It will inevitably do so.  Perhaps corruption.)  It would be nice if the legislature could be kept the only political branch of government, but that seems unlikely.

Political enemies might try to make a governor's term go poorly (hurting the residents in the process) for the larger prize of derailing that person's chances at the presidency.

Being effective at a certain level of the hierarchy might require a lifetime to learn.  There aren't enough lifetimes to learn the next level.

What are the required qualifications for vice president?  Maybe lieutenant governor.  Then, we assume the vice presidency qualifies one for the presidency.

[lsftlwrt] Eliminating mind-control microorganisms

There are microorganisms which do mind control on humans, famously Toxoplasma.  In the near future, I predict we will discover many more of them, catalogue them, measure their prevalence, and eliminate them using public health techniques.

Humanity might then see a new age in which a vast mental fog has gotten lifted.

But what if it was a microorganism infection in the population that helped someone achieve or maintain power?  Would they be keen to deploy public health techniques to eliminate that microorganism?

What if it is a microorganism that is helping prevent anarchy?  Perhaps encouraging people to be docile, or not to fight each other, or form social connections evolutionarily advantageous for the microorganism to propagate?

[jlrxidcc] Lentils around an air bubble

Observed in water: a few dry lentils clustered around an air bubble, sticking to it by surface tension.  The cluster was heavier than water.  Observed multiple such clusters.  The air bubbles were probably from aerated water poured on the dry lentils in a pot.

Monday, January 23, 2017

[hkmdbtur] Boston forgives

The concept of forgiveness -- especially the inability to do so -- often reveals the hate or contempt that was there all along, perhaps previously concealed by political correctness.

Inspired by a Boston Forgives T-shirt, in response to the militaristic and Islamophobic Boston Strong.

[pbgsqvlp] Maximizing golf

Hit the ball as far as possible, modifying the ball, bat, and tee as much as wanted (the ball must remain spherical; human power only).  Compare baseball, tennis, jai alai.  Probably a sling launching a small dense ball will be the maximum.  Previously, throwing a projectile without a launching tool.

Probably need to disqualify building a trebuchet with weight lifted by human power.

[vjcpssrm] Notes on installing Ubuntu 16.04 to a USB stick

Created an Ubuntu 16.04 amd64 server installer image on a USB stick with unetbootin.  This resulted in an ominous warning message on boot saying things might go wrong if one uses unetbootin.  The warning is probably due to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775689 or the Ubuntu equivalent.

Retried creating the stick with Ubuntu's official tool, usb-creator-gtk.  This resulted in "gfxboot.c32: not a COM32R Image" https://bugs.launchpad.net/ubuntu/+source/usb-creator/+bug/1325801 . Workaround described at http://ubuntuforums.org/showthread.php?t=2249701 , i.e., type help and press enter.

The underlying problem with the unetbootin route appears to be Debian/Ubuntu's fault: the filenames are longer than 64 characters, violating the Joliet file system standard. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=775689#94

Used mkusb, which requires a PPA (https://help.ubuntu.com/community/mkusb):
sudo add-apt-repository ppa:mkusb/ppa

On a Dell Optiplex, the key to trigger boot options is F12.

The first interesting experiment was to install everything inside LVM, without /boot as a separate partition outside of LVM.  One used to need /boot outside of LVM, but nowadays grub has an lvm module.  I've been bitten too many times by a too-small /boot partition filling up with too many old kernels, so we'd like to avoid a separate /boot partition.  Let's see if it works.

All these experiments were done by installing Ubuntu onto a 4GB USB key, a somewhat unusual install target (and as mentioned above, from USB key as well).

From experience, choose en_US.UTF-8 locale, not C, because desktop environments fail when there is only a C locale, e.g., terminal programs don't start.

(The experience was that almost no application starts.  uxterm worked.  Trying to start gnome-terminal from inside uxterm gives
Error constructing proxy for org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling StartServiceByName for org.gnome.terminal: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.gnome.Terminal exited with status 8
)

Select noatime as a filesystem mount option because USB SSD has limited writes.

Things went smoothly until the grub install step.  Choosing any of the disks resulted in an error, e.g., "unable to install grub in /dev/mapper".  One needed to manually specify /dev/sdb . To know for sure which disk is being installed, get a shell (ctrl-alt-f2) and run pvdisplay.

After install, this much space was used: (1K blocks)
/dev/mapper/vg--one-root 3800784 total 1371468 used 2216532 available

Here are the autogenerated grub.cfg and fstab.

The next adventure was to repeat the same idea, but all of LVM inside an LUKS/dm-crypt encrypted container.

Created the dm-crypt container with System Rescue CD, because I wanted to customize the number of rounds of password hashing.  ArchWiki is pretty good: https://wiki.archlinux.org/index.php/Dm-crypt

System Rescue CD 4.7.3 with docache is nice.

During Ubuntu install, Grub install failed.  Looking at the logs (ctrl-alt-F4) found:

grub-install: error: attempt to install to encrypted disk without cryptodisk enabled.  Set GRUB_ENABLE_CRYPTODISK=1

The workaround was to get a shell (ctrl-alt-f2) and edit /target/etc/default/grub in the installed system to have GRUB_ENABLE_CRYPTODISK=y . We use "y" and not "1" as the error message states because of https://savannah.gnu.org/bugs/?41524 , which has not been fixed in this version of Ubuntu.

Use nano; it is the editor available in the install shell.

After that, the grub install completes successfully.  Here are the autogenerated grub.cfg and fstab.

Booting requires typing the disk unlock password twice, once for grub, then again for kernel filesystem mount.  The amount of time it takes to verify a password differs greatly between them.

Aside: http://www.pavelkogan.com/2014/05/23/luks-full-disk-encryption/ describes how to only need to type the password once, into grub.  grub then finds a keyfile on the unlocked disk and passes it to the kernel.  I did not try this hack.

Typing the password twice is fine for now.  The grub password prompt, in contrast to the kernel password prompt, does not display a text cursor, so one cannot easily see whether one's keyboard is working properly.  Hitting Enter after typing the password does not advance a visible cursor to the next line.

Ran tasksel to install "Xubuntu minimal".  Also installed firefox.  Also did aptitude full-upgrade, installed a new kernel.  The new kernel did successfully boot, so the necessary grub magic with LUKS and dm-crypt just worked.

During and after full-upgrade, there is a persistent error on both shutdown and boot about lxd-containers.service.  The workaround is to just once manually restart lxd and reboot. http://ubuntuforums.org/showthread.php?t=2326866

sudo service lxd restart

After removing the old kernel, total 1k blocks used according to df is 2751788, or 77% of the 4 GB drive.  Peak usage was 93% during the upgrade.

Xubuntu does provide the Guest login account.

Next experiment, same but install "Lubuntu minimal".  Lubuntu is lighter than Xubuntu.  After similar install, 2459872 K used, 69%

Next experiment, try btrfs because it has transparent compression.  All these experiments were done on a slightly different USB stick, so unfortunately cannot compare with previous experiments.

Sandisk Cruzer Fit is a compact USB stick that does not stick out very much from the USB port.

Setting btrfs compression is done at mount, not at filesystem creation.  We use Ubuntu Expert Mode (f6 "Other Options" at the boot screen) in order to have pauses between steps.

Creating an encrypted container, then directly creating btrfs inside the encrypted container fails. Red screen "Encryption configuration failure" "You have selected the root file system to be stored on an encrypted partition.  This feature requires a separate /boot partition on which the kernel and initrd can be stored." "You should go back and setup a /boot partition".

Workaround: btrfs inside of LVM inside of encrypted container.

noatime is available as an option in the installer UI.

Expert Mode allows a pause between "Partition disks" and "Install the system".  Get a shell, then 'mount -o remount,compress=lzo,ssd /target'.  /target/home automatically remounts to pick up compress=lzo, according to mount.

Where it asks, choose a generic kernel and a big initrd with "everything" because the usb key might be used to boot a different computer.

System Rescue CD seemed to have problems unmounting LVM on shutdown.  Use vgchange -an and lvchange -an .

Chose to install grub to the EFI removable media path.

Edited /target/etc/fstab to have compress=lzo as options.  Also ssd option.

In retrospect, encrypted home directory and filesystem compression don't work so well.  ecryptfs does not have compression https://bugs.launchpad.net/ecryptfs/+bug/492237 , marked "Won't fix".  Read about CRIME and BREACH exploits against compress-then-encrypt.

Mistyping the disk encryption password into grub does not result in a prompt to try again; instead one gets a grub shell.  Reboot to try again.

Usually avoid recommended packages, especially on this limited disk system.  But xul-ext-ubufox seems like a good recommended package to install.

All the extra package installs were done in console (Ctrl-Alt-F1) to avoid disk space usage of starting X (window manager caches, initial config, etc).

after lubuntu, full-upgrade, firefox, xul-ext-ubufox on compress=lzo:
3903488 1K-blocks total, 2573916 used, 917156 available, 74%
after reboot
3903488 total, 2545980 used, 944868 available, 73% (slight improvement)

which is higher than the 69% on a different disk

Plan is to add an additional usb stick if we need swap.

Unlocking the disk in grub takes 15.75 sec.  Later during Linux boot, unlocking the same disk on the same computer takes 6.68 sec.  Grub is clearly not using the most efficient password hashing code.

With cryptsetup -i 20000, grub takes 26 seconds and linux 11 seconds.

btrfs: ubuntu creates @ and @home subvolumes automatically.

Instead of editing /etc/default/grub directly, let's try putting GRUB_ENABLE_CRYPTODISK=y in a file under /etc/default/grub.d (not to be confused with /etc/grub.d).  It needs to be a file with extension .cfg : https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/901600 (the bug also complains about the lack of documentation).  /usr/sbin/grub-mkconfig is the script that reads these files.

dmesg log, both at boot and during full-upgrade:
BTRFS error (device dm-1): could not find root 8

aptitude full-upgrade took 40 minutes

during aptitude purge old linux image
File descriptor 4 (/) leaked on vgs invocation. Parent PID 5071: grub-probe

tasksel lubuntu minimal took 194 minutes.

after firefox and xul-ext-ubufox
3903488 2532984 951304 73%

which provides about 7MB more free space than lzo, a very small improvement, which is strange.

/usr/lib is 570 MB, and gzip -v reports 61.2% compression as a tar.

btrfs mount option compress-force=zlib: df says "3903488 2461512 1025048 71%", so not much of an improvement.

Weird issue on the console: the cursor jumps to column 1 about a minute after boot.

Remove the old linux kernel and linux-headers to save space:
sudo aptitude purge linux-headers-4.4.0-21
which also removes the corresponding -generic package as a reverse dependency.

final: 2298488 used, 1143560 available, 67%

Tuesday, January 17, 2017

[jaoosiou] How weird is your secret key?

Create a tool that will analyze the primes of your RSA secret key (private key): count the number of consecutive composites before and after each prime, and report how unusual (perhaps a p-value) that count is compared to what one would expect by the Poisson distribution and Prime Number Theorem.  This should be easy.

If primes were found by testing consecutive numbers (e.g., GnuPG) then these numbers will be aberrantly high.
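
A sketch of the counting step, assuming a primality test such as isPrime from the arithmoi package (the p-value computation is left out):

import Math.NumberTheory.Primes.Testing (isPrime);

-- given one of the primes of the secret key, count the consecutive
-- composites immediately below and immediately above it
gapsAround :: Integer -> (Integer, Integer);
gapsAround p = (count (subtract 1) (p - 1), count (+ 1) (p + 1)) where {
  count step n = if isPrime n then 0 else 1 + count step (step n); };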

[tadwseml] King swipe

Given a rectangular grid of points, draw a directed line segment from a point to an orthogonal or diagonal neighbor (one chess king move away).  How many possible line segments are there?

f(x,y) = 2*(4*x*y - 3*(x+y) + 2)
? f(3,3)
40
? f(3,4)
58
? f(3,5)
76
? f(3,6)
94
? f(4,4)
84
? f(4,5)
110
? f(4,6)
136
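
The formula can be seen directly: there are 2*y*(x-1) directed horizontal segments, 2*x*(y-1) vertical, and 4*(x-1)*(y-1) diagonal, which sums to f.  It also agrees with brute force enumeration, sketched here in Haskell:

-- count ordered pairs of grid points at Chebyshev distance exactly 1
kingSwipes :: Int -> Int -> Int;
kingSwipes x y = length [ ()
  | a <- [1 .. x], b <- [1 .. y], c <- [1 .. x], d <- [1 .. y]
  , max (abs (a - c)) (abs (b - d)) == 1 ];
-- kingSwipes 3 3 == 40 ; kingSwipes 4 6 == 136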

This provides a large number of simple input gestures for a touchscreen keyboard.  Every gesture is local, so it is easy to have explanatory annotations.

Other grids possible: consider 12 directions on a hexagonal grid.

Sunday, January 15, 2017

[ukjhypur] Shot put with running start

Shot put is an elegantly simple sport: throw the sphere as far as possible.

Consider lifting restrictions to make the sport simpler: the thrower does not need to stay inside the throwing circle but need only stay behind a line.  The thrower may use a running start.  This would make it a little like bowling.  Somewhat more complicated is to have a pool of water or sand pit in front of the line: the thrower is permitted to fall in the water or sand after throwing, so can be running at maximum speed at the point of release right at the line.  How would throwing technique change?

Apply similar anarchy to javelin: instead of a sphere, throwers are permitted to choose any shape of at least the specified weight.  If technology improves so that throwers are throwing too far (as has already occurred once in javelin), increase the minimum weight.  We probably need rules to forbid, for example, powered flight.  I suspect the best shape might resemble an Aerobee.

There will be a frontier of records of weight versus distance.

Friday, January 13, 2017

[kemfqqdz] Magnetic levitation

There are two major types of magnetic levitation for trains.

Electromagnetic levitation: German Transrapid, Shanghai Maglev.

Electrodynamic suspension: two subtypes.

With superconductors, relying on diamagnetism: JR Maglev

Without superconductors, relying on ferromagnetism: Inductrack, Hyperloop.

It is very surprising that the superconductor subtype was developed (to a working full-scale prototype) before the non-superconductor version.  Superconducting magnets seem much more expensive and difficult.

[bwootzwo] Minesweeper scratch ticket

Minesweeper is the best puzzle to be made in physical form in the style of a scratch ticket.  (Not good for the actual lottery; people will repaint incorrectly scratched mines if money is riding on it.)  Perhaps it can be included as a fun irrelevant puzzle in lottery tickets.

What other puzzles would work?

Sudoku as scratch ticket might be a good way to temporarily conceal then deliver the solution.  It could include "mines" of squares that are not solvable by logic, though that might be a tricky thing to define.  Perhaps a sudoku that does not have a unique solution.

[xjujmpsg] Randomly discarding most odd numbers being tested for primality

The common but bad algorithm to find a random large prime number (e.g., for RSA) is first to generate a large number x, then test in order x, x+1, x+2,... until a prime is found.  This is bad because it is not uniform: it will preferentially select primes with large prime gaps preceding them.  We do not know yet if this bias causes significant cryptographic weakness.  GnuPG uses this algorithm, as of 1.4.21.

The right way to fix this is to generate complete new random numbers for each number tested for primality.  However, this consumes a lot of entropy.

We can partially repair the problem as follows: test numbers in order starting from x as before.  However, before testing a number for primality, reject it outright with high probability, perhaps 0.999.  Then only 0.1% of numbers will be tested for primality, which will skip over many actual primes, avoiding many starting x's mapping to the same found prime x+i.

The pseudorandom number generator used for sampling the 0.1% probably does not have to be very good, so it can be quick, so very little computing time will be spent rejecting numbers.  Most of the computing time will be primality testing.
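
A sketch of the repaired search, again assuming arithmoi's isPrime; StdGen from System.Random stands in for the cheap sampling PRNG (whether it must be cryptographically strong is exactly the open question below):

import Math.NumberTheory.Primes.Testing (isPrime);
import System.Random (StdGen, randomR);

-- walk upward from the random starting point x, discarding each
-- candidate with probability 0.999 before any primality test
findPrime :: StdGen -> Integer -> Integer;
findPrime g x = let { (r, g2) = randomR (1, 1000 :: Int) g; }
  in if r == 1 && isPrime x then x else findPrime g2 (x + 1);

The && short-circuits, so only the surviving 0.1% of candidates ever reach the expensive primality test.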

How good is this repair?  How strong does the sampling PRNG need to be?  Does it need to be cryptographically strong?

Another way to do it might be to select a large random-sized jump to each new number to be tested.

[ntmxyalx] Least common multiple of the first N integers

We give the size of the number in bits, i.e., the logarithm base 2 (rounded down) of OEIS A003418.  Previously.

p=1 ; for(i=1 , 733 , p=lcm(i , p) ; printf("%d %d ; " , i , floor(log(p) / log(2))))

Omitting repeated values for compactness (log of A051451):

p=1 ; for(i=1 , 733 , g=gcd(i,p) ; if(g!=i , p=p*i/g ; printf("%d %d ; " , i , floor(log(p) / log(2)))))

2 1 ; 3 2 ; 4 3 ; 5 5 ; 7 8 ; 8 9 ; 9 11 ; 11 14 ; 13 18 ; 16 19 ; 17 23 ; 19 27 ; 23 32 ; 25 34 ; 27 36 ; 29 41 ; 31 46 ; 32 47 ; 37 52 ; 41 57 ; 43 63 ; 47 68 ; 49 71 ; 53 77 ; 59 83 ; 61 88 ; 64 89 ; 67 95 ; 71 102 ; 73 108 ; 79 114 ; 81 116 ; 83 122 ; 89 129 ; 97 135 ; 101 142 ; 103 149 ; 107 155 ; 109 162 ; 113 169 ; 121 172 ; 125 175 ; 127 182 ; 128 183 ; 131 190 ; 137 197 ; 139 204 ; 149 211 ; 151 218 ; 157 226 ; 163 233 ; 167 240 ; 169 244 ; 173 251 ; 179 259 ; 181 266 ; 191 274 ; 193 282 ; 197 289 ; 199 297 ; 211 305 ; 223 312 ; 227 320 ; 229 328 ; 233 336 ; 239 344 ; 241 352 ; 243 353 ; 251 361 ; 256 362 ; 257 370 ; 263 378 ; 269 386 ; 271 395 ; 277 403 ; 281 411 ; 283 419 ; 289 423 ; 293 431 ; 307 439 ; 311 448 ; 313 456 ; 317 464 ; 331 473 ; 337 481 ; 343 484 ; 347 492 ; 349 501 ; 353 509 ; 359 518 ; 361 522 ; 367 531 ; 373 539 ; 379 548 ; 383 556 ; 389 565 ; 397 573 ; 401 582 ; 409 591 ; 419 599 ; 421 608 ; 431 617 ; 433 626 ; 439 634 ; 443 643 ; 449 652 ; 457 661 ; 461 670 ; 463 679 ; 467 687 ; 479 696 ; 487 705 ; 491 714 ; 499 723 ; 503 732 ; 509 741 ; 512 742 ; 521 751 ; 523 760 ; 529 765 ; 541 774 ; 547 783 ; 557 792 ; 563 801 ; 569 810 ; 571 820 ; 577 829 ; 587 838 ; 593 847 ; 599 856 ; 601 866 ; 607 875 ; 613 884 ; 617 893 ; 619 903 ; 625 905 ; 631 914 ; 641 924 ; 643 933 ; 647 942 ; 653 952 ; 659 961 ; 661 970 ; 673 980 ; 677 989 ; 683 999 ; 691 1008 ; 701 1017 ; 709 1027 ; 719 1036 ; 727 1046 ; 729 1047 ; 733 1057 ;

[nqxjypxr] Tunable urandom

On Linux, /dev/random blocks when high quality random bits are not available, but /dev/urandom continues to emit "only cryptographically strong" bits even if it hasn't been able to mix in entropy for a long time.

Wanted is something tunable in between: it blocks when the ratio of entropy in and bits out decreases below some user-specified threshold.
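
A toy model of the proposed semantics (names hypothetical):

data Pool = Pool { entropyIn :: Double, bitsOut :: Double };

-- a read of n more bits blocks unless the in/out ratio stays at or
-- above the user-specified threshold
wouldBlock :: Double -> Pool -> Double -> Bool;
wouldBlock threshold pool n = entropyIn pool / (bitsOut pool + n) < threshold;

A threshold near 1 behaves like /dev/random; a threshold of 0 behaves like /dev/urandom.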

[dvaklhhl] Falsify the present

Trollishly create false records of the present, your present, so that future researchers and historians will puzzle over them or perhaps blindly accept them as truth.

Falsifying the present seems easier than falsifying the past, because the past requires access to historical records.

What kinds of false records will the future be most likely to be gullible to?  Perhaps ask current historians what types of records of the past they struggle to find or verify.

Coming soon will be significant climate change.  Record what life is like before climate change.

Tuesday, January 10, 2017

[sxisqanh] Is it live or is it Memorex?

Under what conditions is it preferable to consume or experience an event live instead of via a sound or video recording?

The biggest difference of "live" is that it is social: you consume simultaneously with others around you.  When does that matter?  How does that matter? 

There are a few other technical differences that are rapidly being eliminated by technology, e.g., VR.

[fcpxjrfj] Opposition to death penalty

There are those who oppose the death penalty because it is too cruel.  There are others who oppose it because it is not cruel enough: lifetime imprisonment is believed crueler so better -- we have a very vindictive society.

In places which have abolished the death penalty, how much of the reason for abolishing it was the latter?

Those who believe prison should be cruel provide the political force for mistreatment of prisoners by guards or by fellow inmates: stand idly by and let it happen.

[lavljmgv] Fixing leap seconds

The obvious solution to avoid time going backward at the leap second is to deploy a separate time standard which monotonically counts forward, so it is TAI except expressed as a number.  We will need a new set of system calls to get and set that number.  We probably need to augment NTP to distribute that number. 

Computers can internally count the new number, or internally count UTC as they currently do.  They can provide both system calls, converting between them depending on which system call is invoked and what they internally count.  New and updated software should use the new system call, but the old system call can be provided indefinitely for backward compatibility.  New systems should internally count the new number.

Smearing the leap second over (say) a day is less than ideal because time will disagree with computers not smearing.  It may also interfere with tasks needing a time interval to high precision.

The new number should be significantly different from the POSIX counter (Unix time) so that if one is accidentally substituted for the other, it will be obvious.  Previously, we proposed a fanciful 835-bit wide number, which avoids needing to encode floating point and will probably never need to worry about overflow.

There is something wrong about attempting to politically end leap seconds when this kind of non-disruptive software solution exists.

[wvrnupbd] Rogue planets

During the formation of a solar system, there are lots of planets which haven't organized themselves into stable orbits yet, so many planets probably get ejected.

At the end of its lifetime, a star undergoes mass loss which probably destabilizes planetary orbits (e.g., by breaking resonance structures) causing more planets to become ejected.

How dense is space with planets unattached to stars?  Also comets, of course.  Navigation hazard?

[qgppohzt] Brotli text generator

Invert the language model hardcoded into the Brotli compression algorithm to turn it into a random text generator.

Monday, January 09, 2017

[lvbetgkb] Right section of a function

A left section of a binary (two-argument) function is easy to write in Haskell using partial function application: just omit the last (right) argument.  A right section is a little bit more awkward, requiring backquotes, lambda, or flip.

import Data.Function((&));

-- example binary function (not an operator)
f2 :: a -> [a] -> [a];
f2 = (:);

-- we will use the larger functions later
f3 :: Int -> a -> [a] -> [a];
f3 _ = (:);

f4 :: Bool -> Int -> a -> [a] -> [a];
f4 _ _ = (:);

test :: [String];
test = map (\f -> f 'h') -- all of these evaluate 'h':("el"++"lo") yielding hello
[ (`f2` ("el" ++ "lo")) -- backquotes (grave accents) are inline operator syntax. An inline operator followed by an argument, all surrounded by parentheses, is operator right section syntax: one is supposed to imagine a hole in front of the backquotes: (__ `f2` ("el" ++ "lo"))
, (\arg1 -> f2 arg1 ("el" ++ "lo")) -- lambda syntax
, (\arg1 -> f2 arg1 $ "el" ++ "lo")
, ((flip f2) ("el" ++ "lo"))
, ((flip f2) $ "el" ++ "lo")
, (flip f2 $ "el" ++ "lo")
, (flip f2 ("el" ++ "lo")) -- It might be a little surprising that this one works, if one had thought of "flip" as a function taking only one argument, namely the function to be flipped. However, because of currying, it actually takes 3 arguments. flip :: (a -> b -> c) -> b -> a -> c.
, ("el" ++ "lo" & flip f2)

-- For these 3- and 4-argument cases, we would like to create a lambda on the penultimate argument.
-- , (`f3 (2 + 3)` ("el" ++ "lo")) -- This does not work because the contents of the backquotes must be a binary function that is a single token, not an expression.
, (let { t2 = f3 (2 + 3) } in (`t2` ("el" ++ "lo")))
, (\penultimate -> f3 (2 + 3) penultimate ("el" ++ "lo"))
, (\penultimate -> f3 (2 + 3) penultimate $ "el" ++ "lo") -- this wordy lambda syntax is one of the best in terms of low parenthesis count and avoiding deep parentheses nesting.
, (flip (f3 (2 + 3)) ("el" ++ "lo")) -- similar to "a little surprising" above
, (flip (f3 (2 + 3)) $ "el" ++ "lo")
, (flip (f3 $ 2 + 3) $ "el" ++ "lo")
, ((flip $ f3 (2 + 3)) $ "el" ++ "lo")
, ((flip $ f3 $ 2 + 3) $ "el" ++ "lo")
, ("el" ++ "lo" & (f3 (2 + 3) & flip))
, ("el" ++ "lo" & (2 + 3 & f3 & flip))

, (\penultimate -> f4 (not True) (2 + 3) penultimate ("el" ++ "lo"))
, (\penultimate -> f4 (not True) (2 + 3) penultimate $ "el" ++ "lo")
, (let { t2 = f4 (not True) (2 + 3) } in (`t2` ("el" ++ "lo")))
, (flip (f4 (not True) (2 + 3)) ("el" ++ "lo"))
, (flip (f4 (not True) (2 + 3)) $ "el" ++ "lo")
, ((flip $ f4 (not True) (2 + 3)) $ "el" ++ "lo")
, ((flip $ f4 (not True) $ 2 + 3) $ "el" ++ "lo")
, ("el" ++ "lo" & (f4 (not True) (2 + 3) & flip))
, ("el" ++ "lo" & (2 + 3 & f4 (not True) & flip))
, ("el" ++ "lo" & (2 + 3 & (not True & f4) & flip))
];

(\f -> f 'h') could have been written ($ 'h') , a right section itself, but we deliberately avoid being potentially obscure in the test harness.

Friday, January 06, 2017

[kzpybzhu] Paragraph separators in Gmail android app

Write two paragraphs of text as a message on different phones and examine the HTML that actually gets sent.  (Easiest done at the receiving end.)  There is something terribly wrong with how the newer Nexus 5x does it.

This affects blog-by-email for this blog, and makes GnuPG unable to parse PGP ASCII armor.

<div dir="auto">Nexus<div dir="auto"><br>
</div><div dir="auto">5x</div></div> 

Android version: 7.0
Android security patch level: November 5, 2016
Baseband version: M8994F-2.6.33.2.14
Kernel version: 3.10.73-g5a3d8a9
Build number: N5D91L

Gmail
version 6.11.6.140557227.release

Google Keyboard
version 5.1.23.127065177-arm64-v8a

<p dir="ltr">Galaxy</p>  <p dir="ltr">Nexus</p> 

Android version 4.2.2
Baseband version I515.10 V.FK01 / I515.FK02
Kernel version 3.0.31-g9f818de
Build number JDQ39

Gmail
version 6.11.6.140557227.release

Google Keyboard
version 5.1.23.127065177-armeabi-v7a