Test

SMPTE color bars – Click for your own test pattern kit

This page is for posters to test comments prior to submitting them to WUWT. Your tests will be deleted in a while, though especially interesting tests, examples, hints, and cool stuff will remain for quite a while longer.

Some things that don’t seem to work any more, or perhaps never did, are kept in Ric Werme’s Guide to WUWT.

Formatting in comments

WordPress does not provide much documentation for the HTML formatting permitted in comments. There are only a few commands that are useful, and a few more that are pretty much useless.

A typical HTML formatting command has the general form of <name>text to be formatted</name>. A common mistake is to forget the end command. Until WordPress gets a preview function, we have to live with it.
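
For example, a correctly paired command and its end command look like this:

This sentence has <b>two bold words</b> in the middle.

If you forget the closing </b>, the rest of your comment will usually come out bold.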

N.B. WordPress handles some formatting very differently than web browsers do. A post of mine shows these and less useful commands in action at WUWT.

N.B. You may notice that the underline command, <u>, is missing. WordPress seems to suppress it for almost all users, so I’m not including it here. Feel free to try it, but don’t expect it to work.

Name: b (bold)
Sample: This is <b>bold</b> text
Result: This is bold text
The strong command also does bolding.

Name: i (italics)
Sample: This is <i>italicized</i> text
Result: This is italicized text
The em (emphasize) command also does italics.

Name: a (anchor)
Sample: See <a href=http://wermenh.com>My home page</a>
Result: See My home page
A URL by itself (with a space on either side) is often adequate in WordPress. It will make a link to that URL and display the URL, e.g. See http://wermenh.com.

Some sources on the web present anchor commands with other parameters beyond href, e.g. rel=nofollow. In general, use just href=url and don’t forget the text to display to the reader.
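
A complete anchor, then, might look like this (quoting the URL is optional for simple URLs, but it never hurts):

See <a href="https://wattsupwiththat.com/resources/">the WUWT resources page</a> for more.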

Name: blockquote (indent text)
Sample:
My text
<blockquote>quoted text</blockquote>
More of my text
Result:
My text

quoted text

More of my text
Quoted text can be many paragraphs long.
WordPress italicizes quoted text (and the <i> command enters normal text).

Name: strike
Sample: This is <strike>text with strike</strike>
Result: This is text with strike

Name: pre (“preformatted” – use for monospace display)
Sample: <pre>These lines are bracketed<br>with &lt;pre> and &lt;/pre></pre>
Result:
These lines are bracketed
with <pre> and </pre>
Preformatted text, generally done right. Use it when you have a table or something else that will look best in monospace. Each space is displayed, something that <code> (next) doesn’t do.

Name: code (use for monospace display)
Sample: <code>Wordpress handles this very differently</code>
Result: WordPress handles this very differently
See http://wattsupwiththat.com/resources/#comment-65319 to see what this really does.

YouTube videos

Using the URL for a YouTube video creates a link like any other URL. However, WordPress accepts the HTML for “embedded” videos. From the YouTube page after the video finishes, click on the “embed” button and it will suggest HTML like:

<iframe width="560" height="315"
        src="http://www.youtube.com/embed/yaBNjTtCxd4"
        frameborder="0" allowfullscreen>
</iframe>

WordPress will convert this into an internal square bracket command, changing the URL and ignoring the dimensions. You can use this command yourself and set its options for dimensions. WordPress converts the above into something like:

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=640&h=480]

Use this form and change the w and h options to suit your interests.
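
For example, to ask for the 560×315 size that YouTube’s embed code suggested above, you could post something like:

[youtube https://www.youtube.com/watch?v=yaBNjTtCxd4&w=560&h=315]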

Images in comments

If WordPress thinks a URL refers to an image, it will display the image
instead of creating a link to it. The following rules may be a bit excessive,
but they should work:

  1. The URL must end with .jpg, .gif, or .png. (Maybe others.)
  2. The URL must be the only thing on the line.
  3. This means you don’t use <img>; WordPress ignores it and displays nothing.
  4. This means WordPress controls the image size.
  5. <iframe> doesn’t work either; it just displays a link to the image.
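
Putting those rules together, a comment that displays an image might look like this (the URL is only a hypothetical example; substitute your own):

Here is the snow bank I mentioned:

https://example.com/images/winter/snowbank.jpg

Quite a pile, isn’t it?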

If you have an image whose URL doesn’t end with the right kind of suffix, there may be two options if the URL includes attributes, i.e. if it has a question mark followed by attribute=value pairs separated by ampersands.

Often the attributes just provide information to the server about the source of the URL. In that case, you may be able to just delete everything from the question mark to the end.

For some URLs, e.g. many from FaceBook, the attributes provide lookup information to the server and can’t be deleted. Most servers don’t bother to check for unfamiliar attributes, so try appending “&xxx=foo.jpg”. This will give you a URL that ends with one of the extensions WordPress will accept.
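
For example (a hypothetical URL, just to show the pattern), a link like

https://example.com/photos/lookup?id=12345

won’t display as an image, but

https://example.com/photos/lookup?id=12345&xxx=foo.jpg

ends in .jpg, so WordPress should treat it as one, and most servers will simply ignore the unfamiliar xxx attribute.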

WordPress will usually scale images to fit the horizontal space available for text. One place it doesn’t is in blockquoted text; there it seems to display images full size, and large images overwrite the right-side nav bar text.

Special characters in comments

Those of us who remember acceptance of ASCII-68 (a specification released in 1968) are often not clever enough to figure out all the nuances of today’s international character sets. Besides, most keyboards lack the keys for those characters, and that’s the real problem. Even if you use a non-ASCII but useful character like ° (as in 23°C), some optical character recognition software or cut-and-paste operation is likely to change it to 23oC or, worse, 230C.

Nevertheless, there are very useful characters that are most reliably entered as HTML character entities:

Type this    To get    Notes
&amp;        &         Ampersand
&lt;         <         Less-than sign (left angle bracket)
&bull;       •         Bullet
&deg;        °         Degree (use with C and F, but not with K (kelvins))
&#8304; &#185; &#178; &#179; &#8308;    ⁰ ¹ ² ³ ⁴    Superscripts (use 8304, 185, 178-179, 8308-8313 for superscript digits 0-9)
&#8320; &#8321; &#8322; &#8323;    ₀ ₁ ₂ ₃    Subscripts (use 8320-8329 for subscript digits 0-9)
&pound;      £         British pound
&ntilde;     ñ         For La Niña & El Niño
&micro;      µ         Mu, micro
&plusmn;     ±         Plus or minus
&times;      ×         Times
&divide;     ÷         Divide
&ne;         ≠         Not equals
&nbsp;       (space)   Like a space, but without the special processing (word wrapping, multiple-space discarding) that ordinary spaces get
&gt;         >         Greater-than sign (right angle bracket); generally not needed
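
Putting a few of these together, typing

23&deg;C &plusmn; 0.5&deg;C, CO&#8322;, and 10&#8310; W

should display as

23°C ± 0.5°C, CO₂, and 10⁶ W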

Various operating systems and applications have mechanisms to let you directly enter character codes. For example, on Microsoft Windows, holding down ALT and typing 248 on the numeric keypad may generate the degree symbol. I may extend the table above to include these some day, but the character entity names are easier to remember, so I recommend them.

LaTeX markup

WordPress supports LaTeX. To use it, do something like:

$latex P = e\sigma AT^{4}$     (Stefan-Boltzmann's law)

$latex \mathscr{L}\{f(t)\}=F(s)$

to produce

P = e\sigma AT^{4}     (Stefan-Boltzmann’s law)

\mathscr{L}\{f(t)\}=F(s)
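
The same notation handles fractions and exponents; for instance, rearranging the Stefan-Boltzmann example above to solve for temperature,

$latex T = \left(\frac{P}{e\sigma A}\right)^{1/4}$

should render with a built-up fraction raised to the 1/4 power.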

Linking to past comments

Each comment has a URL that links to the start of that comment. This is usually the best way to refer to a comment in a different post. The URL is “hidden” under the timestamp for that comment. While details vary with operating system and browser, the best way to copy it is to right-click on the timestamp near the start of the comment, choose “Copy link location” from the pop-up menu, and paste it into the comment you’re writing. You should see something like http://wattsupwiththat.com/2013/07/15/central-park-in-ushcnv2-5-october-2012-magically-becomes-cooler-in-july-in-the-dust-bowl-years/#comment-1364445.

The “#<label>” at the end of the URL tells a browser where to start the page view. It reads the page from the Web, searches for the label, and starts the page view there. As noted above, WordPress will create a link for you; you don’t need to add an <a> command around it.
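
If you would rather have the link appear under your own words instead of as a bare URL, the anchor command from the formatting table above works with these comment URLs too, e.g.:

<a href="http://wattsupwiththat.com/2013/07/15/central-park-in-ushcnv2-5-october-2012-magically-becomes-cooler-in-july-in-the-dust-bowl-years/#comment-1364445">as noted in an earlier thread</a>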

One way to avoid the moderation queue

Several keywords doom your comment to the moderation queue. One word, “Anthony,” is caught so that people trying to send a note to Anthony will be intercepted, and Anthony should see the message pretty quickly.

If you enter Anthony as An<u>th</u>ony, it appears to not be caught,
so apparently the comparison uses the name with the HTML within it and
sees a mismatch.


321 thoughts on “Test”

  1. I just had another thought about underlines.

    I think I discovered that I could get around the automatic spam trap by writing Anthony with an empty HTML command inside, e.g. Ant<b></b>hony.

    What happens when I try that with underline?

    Apologies in advance to the long-suffering mods, at least one of these comments may get caught by the spam trap.

    • I remember seeing this test pattern on TV late at night after the National Anthem and before the local station broadcast came on early in the morning while the biscuits, bacon and oatmeal were still cooking. The first show after a weather report was “Dialing For Dollars” and you had better know the count when your phone rang…. 1 up and 3 down… to get the cash.

      • He used the <pre> command; it’s described in the main article. Pre is for preformatted text and displays in monospace with all the spaces preserved.

      • Source			Energy (J)	Normalized
        Atmosphere:		1.45x10^22 J		1
        Ice				1.36x10^25 J		935
        Oceans			1.68x10^25 J		1,157
      • Source          Energy (J)          Normalized
        Atmosphere:     1.45x10^22 J              1
        Ice:            1.36x10^25 J            935
        Oceans:         1.68x10^25 J          1,157
      • Source          Energy (J)          Normalized (E)
        Atmosphere:     1.45x10^22 J              1 J
        Ice:            1.36x10^25 J            935 J
        Oceans:         1.68x10^25 J          1,157 J
      • In my previous post I use the example of the following over the next 100 years: 3 units of new energy goes to the oceans and 1 unit to the atmosphere – with all 4 units being equal in Joules. 1 unit raises the average temperature of the atmosphere by 4C or the average temperature of the oceans by 0.0003C. In this example the atmosphere warms by 4C and the oceans warm by 4 x 0.0003C or 0.0012C. It is exactly the higher heat capacity you mention that allows the heat energy to be absorbed with less movement of temperature. At the detail level maybe the top 2 inches of water gets much hotter and this will then support the physics of the more complex mechanisms you mention. But the beauty of this approach (I think – and hope) is that it doesn’t really matter how the energy gets distributed in the water with its corresponding temperature effect. Determine the mass of the ocean water you want to see affected in this model and apply the energy to it to get the temperature you would expect.

  2. WordPress only displays images for URLs on their own line and ending with an image file extension. If I delete the attribute string above, i.e. ?token=I7JQbQli1swRgik%2BKnIKAmCk52Y%3D then what’s left should work:

    • Now one that would permit image display:

      Update: Right clicking to get the image’s url gave me a URL that goes through WP’s cache via (slashes replaced by spaces, periods by dashes) i2-wp-com wermenh-com images winter0708 P3020227_snowbank7-jpg

    • Now just the image without a suffix:

      Update: This image uses the same URL as the previous cached image. That means we can’t use a changing suffix to force a trip around the cache any more for HTTP images. I’ll play with HTTPS later.

      • Reply to Ric W ==> Thanks — I was fielding comments on an essay using an unfamiliar tablet, and wasn’t sure which and/or both were part of HTML5. I usually use the old ClimateAudit comment Greasemonkey tool, even though its formatting is funky these days, for the tags. Don’t suppose you could update that add-in?

      • IIRC, Greasemonkey was written for CA, which uses a different theme than WUWT does.

        I don’t have the time to figure out the JavaScript code or whatever it’s written in, and I don’t have the ability to make changes that deep in WUWT.

        Instead of Greasemonkey, I often use https://addons.mozilla.org/en-US/firefox/addon/its-all-text/ . It can open up an external editor, so it has saved my butt a few times when WP loses a post I was making.

  3. Hey, what happened to the old smiley face?? When I tried to post it, this appeared:

    I wonder if WordPress changed any others?

     ☹ ☻

    The old smiley was more subtle; less in-your-face. The new one is way too garish.

    If WP keeps that up, I’ll just have to use this lame replacement:

    :-)

    Or even worse:

    ;-)

  4. It’s so hard to get a consistent story here. We have an article saying, accurately, “#ExxonKnew ?? meh… #JohnsonKnew”. Yes, what Exxon knew in 1977 was just the conventional scientific understanding. And it wasn’t about imminent cooling.

    I’m hearing you loud and clear but I’d just like to have you on record.

    Nick, are we in the midst of man-made climate warming right now – imminent or otherwise – as we speak, this day of our Lord AD 2017, according to the world’s eminent scientists? Or is it all a media beat-up?

    A yes or no answer will do.

    Again, to be as clear as possible, I’ll ask the question again*:

    Is it “the conventional scientific understanding” today**, that the world is actually being threatened by man-made global warming?

    Simple question, I’m sure you can answer!

    *Answer both separately if you feel the rephrasing might change your answer.
    **November 2017

  5. This site is a plethora of theories and assumptions on the matter.
    Never has I seen such propaganda. Your information is less then accurate to say the least.
    FYI, There is nothing we can do about climate change. The government has been chem spraying for decades. aluminum barium etc.. These are to deflect sunshine back into space. and dumb us down, causes cancer ect… They are WRONG>
    FYI our sun is beginning a new cycle. This cycle is a every + or – 360 year event that cools the earth considerably. Much like a mini ice age. First temps spike then drop and drop considerably Do you home work over there. The sun dictates everything here on earth. duh, There is no stopping it. Sun giving off less energy the earths magnetic field adjusts and we get a climate change. FACT!!!!!!!!!!!!!!! Stop the B.S. and go back to school. Government B.S. to make a buck. Despicable website chalk full of subversive agendas. So sad. India and china are monster contributors to pollution and this site and the Paris agreement do NOTHING to curb there output. Your all B.S. Admit it… this site is a scam. LOL
    My BEST,
    GOD

    [????? .mod]

  6. Anthony,

    I’m trying to figure out why my posts (comments) never appear on your board. What am I doing wrong?

    Kevin

    [Looks like they’re getting caught by WordPress’s IP blacklist filter. Happens all the time to otherwise innocuous posters. All we can really recommend is to keep trying, and try dropping a note to the mods. Even if it gets “trashed” by WP, we can find and rescue the comment. -mod]

  7. I find that the trend is borderline significant now. – Nick Stokes

    And thus, Nick Stokes – the mathematician – sells his soul as a politician.

    Borderline and significant, now that is a wonderful construct! I’m hearing you Nick, I’ve got some land I’d like to sell; it’s swamp land but it is borderline prime! You know what I mean, it’s trending dry, it’s just a little wet! But the trend is significant, you can bet your house on it! ;-)

  8. Between significant and not significant, the gods have placed, as they so often do, a border region.

    – Nick Stokes

    Sure enough Nick but why should we assume that such a border would necessarily be smooth, flat, level or linear! ;-)

  9. Between significant and not significant, the gods have placed, as they so often do, a border region. – Nick Stokes

    Sure enough Nick but why should we assume that such a border would necessarily equate to the smooth, the average, the flat, the level or the linear!* ;-)

    * Scale, being the key word here!

  10. –> Ray in SC December 5, 2017 at 3:37 pm said:

    Scott,
    Nick gave an informative response to a question and, in doing so, has added much more to the discussion than your disparaging remark.

    No, he spouted about a statistical test that sounds well and good but he failed to point out that sample size is key and that t-tests are unusual or unreliable when the sample size is low. And that is the point at issue here. Calculating the probability of a null hypothesis on too small a dataset is worse than no significance testing at all!

    There is a false confidence built into this specific statistic that favours trust in the test itself. For example, the smaller the dataset, the greater the probability that the sample will be further away from the null hypothesis, even when the null hypothesis is true.

    However, these statistical results have absolutely nothing to do with reality!* Even when you understand it, the theory – which I actually also happen to love – was originally termed “Experimental Probability” because what happens in the real world cannot be encapsulated. Every moment, every action, is an experiment. Very large numbers of these “experiments” will approach the “theory”, but that is all we have, to date!

    We – many of us here – keep coming back to this argument about the validity of probability distribution and its application in the “real world”.

    *Okay, they do have something to do with reality, I concede that argument but the connection is not coincident with the particular point I’m trying to make here. ;-)

  11. In contrast, we use the following code.

    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[M]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent Molecule Masses")
    grid()
    for(i in 1:M) lines(c(0, -ab[[M]][i, 1] / ab[[M]][i, 2]), 
                        c(ab[[M]][i, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(round(m / amu), "AMUs"), title = "Particle Mass")
    

    Note in the following

    [The mods appreciate your effort in testing your code here, but, no we don’t understand it either. 8<) .mod]

  12. The column of gas used in Brown’s scenario is constrained laterally and so leads only to a linear decline in density and pressure with height which does not properly reflect the real world scenario.

    Actually, no. As Coombes and Laue demonstrated, a uniform temperature is entirely consistent with an exponential pressure reduction.
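
    For readers who want the formula: the isothermal case Coombes and Laue describe is presumably the standard barometric relation, written here in the $latex ...$ notation from the main article (m is the molecular mass):

    $latex p(z) = p_{0}\,e^{-mgz/(k_{B}T)}$

    That is, with T held constant, pressure still falls exponentially with height.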

  13. Unfortunately, except for one post I slipped in while he was on sabbatical, he has declined all mine ever since I failed to exhibit enough deference to Christopher Monckton’s erudition.

  14. Sorry,

    w.

    Don’t be. Your post provided a lot of insight, which I appreciate. As to your failing to comprehend the shortcomings of your “proof,” I completely understand that not everyone is comfortable with starting from first principles to re-examine his beliefs; each of us has his respective limitations.

    For the benefit of readers with a somewhat broader perspective, though, I’ll explain it thus:

    If you simulate a monatomic gas comprising two constituents, one consisting of N/2 molecules of mass m and another consisting of N/2 molecules of mass 2m, all (for the sake of simplicity) randomly traveling in one dimension subject to a gravitational field and among them having total (kinetic + potential) energy NE_{avg}, what you find after a long period of “thermalization” is that at altitude NE_{avg}/2mg the first constituent’s average molecular kinetic energy is \frac{NE_{avg}}{2(3N-2)}, while the second constituent’s is zero. (By “what you find” I mean what you find after averaging over a long time; variances are so great that over short time periods these averages are not repeatably detectable. I had attempted to get a post published here that would explain this in more detail, but apparently I’ve become persona non grata here since I disputed Christopher Monckton’s bizarre mathematics.)

    In any event, those energies translate to respective temperatures at that altitude of \frac{NE_{avg}}{k_B(3N-2)} and absolute zero for the different constituents, where k_B is Boltzmann’s constant. You will also find that both constituents have the same average temperature, 2NE_{avg}/\left(k_B(3N-2)\right), at altitude zero and that both temperatures change linearly with altitude: both lapse rates are non-zero, but, since the constituents are both at equilibrium, there is by definition no average heat flow.

    In other words, the equilibrium temperatures at the higher altitude are different for the different constituents, the different constituents have different lapse rates, and these values’ averages over long periods persist even though the constituents are intimately mixed, but despite their temperature differences and their intimate mixing no net heat flows on average between them.

    Moreover, you’d find that each constituent by itself would exhibit a lapse rate twice the value it exhibits when the two are mixed.

    Now, a silver wire is not the same as a gas, and mixing two gas components together is not the same as coupling a silver wire to a gas column. But adding the second constituent to the first reduces the original first’s lapse rate by removing the constraint that the first constituent’s total energy remain fixed. Since coupling the gas column to the silver wire removes a similar constraint from the gas column, we are entitled to question the silver-wire proof’s assumption that coupling the silver wire to the gas column would leave the latter’s lapse rate unchanged.

    Moreover, since the added constituent adopts a lapse rate different from that of the original constituent—and since that difference persists despite the constituents’ being intimately mixed—it’s not self-evident that thermal coupling would cause the silver wire’s temperature difference to equal the gas column’s. Nor, since the different gas constituents’ lapse rates cause no heat flow down their respective temperature gradients, can we conclude that whatever temperature gradient prevails at equilibrium in the silver wire would necessarily cause heat to flow within it.

    In short, although what we think we know about Fourier’s law would seem to dictate that any temperature gradient at all would cause some heat to flow through a heat-conductive medium, we find if we reason from first principles that gravity modifies that conclusion. Before we apply a physical law, that is, it’s important to know the assumptions on which it is based.

  15. g <- 9.8 # Gravitational acceleration, m/sec^2
    k.B <- 1.38064852E-23 # Boltzmann's constant, J/K
    amu <- 1.660539040E-27 # Atomic mass unit, kg
    T <- 288 # Temperature, K
    
    gas <- function(m, z0, v0, t, collision.prob = 0.5){
      N <- length(m)
      if(length(z0) != N | length(v0) != N) 
        stop("m, z0, and v0 must be the same length")
      g <- 9.8
      z1 <- z0
      v1 <- v0
      t1 <- t[1]
      v <- z <- matrix(nrow = N, ncol = length(t))
      repeat{
        #  Decide whether (provisionally) to allow collision and, if collision would
        #  be allowed, which molecules would collide and when:
        tc <- Inf
        colliders <- NULL
        if(N > 1){
          for(i in 1:(N - 1)){
            for(j in (i + 1):N){
              if(collision.prob > runif(1)){
                tc.i <- t1 - diff(z1[c(i, j)]) / diff(v1[c(i, j)])
                if(tc.i <= t1) next
                if(tc.i < tc){
                  tc <- tc.i
                  colliders <- matrix(c(i, j), nrow = 1)
                }else if(tc.i == tc){
                  colliders <- rbind(colliders, c(i, j))
                }
              }
            }
          }
        }
        
        # Determine provisional bounce time and which molecules would bounce 
        tbs <- numeric(N)
        for(i in 1:N) tbs[i] <- max(Re(polyroot(c(z1[i], v1[i], -g / 2))))
        tb <- min(tbs)  
        bouncers <- which(tbs == tb)
        tb <- tb + t1
        
        #  End of current interval is earlier of provisional collision and bounce
        #  times:
        t2 <- min(tc, tb)
        interval <- which(t >= t1 & t < t2)
        
        # Current interval's position and velocity curves
        z[, interval] <- z1 + v1 %*% t((t[interval] - t1)) + 
          rep(-g, N) %*% t((t[interval] - t1) ^ 2 / 2)
        v[, interval] <- v1 + rep(-g, N) %*% t((t[interval] - t1))
        
        # for(i in 1:N) lines(t[interval], z[i, interval], col = i, lty = 3, lwd = 3)
        
        # Next interval's initial conditions:
        z1 <- z1 + v1 * (t2 - t1) - g * (t2 - t1) ^ 2 / 2
        v1 <- v1 - g * (t2 - t1)
        
        #  Implement collisions or bounces, whichever would come first:
        if(tc < tb){
          vc <- v1
          for(i in 1:dim(colliders)[1]){
            z1[colliders[i,]] <- rep(mean(z1[colliders[i,]]), 2)
            v1[colliders[i, 1]] <- 
              ((-diff(m[colliders[i,]])) * vc[colliders[i, 1]] +
                 2 * m[colliders[i, 2]] * vc[colliders[i, 2]]) / 
              sum(m[colliders[i,]])
            v1[colliders[i, 2]] <- v1[colliders[i, 1]] - diff(vc[colliders[i,]])
          }
        }else{
          v1[bouncers] <- -v1[bouncers]
        }
        t1 <- t2
        if(t[length(t)] < t1) break
      }
      list(t = t, z = z, v = v, K = 1/2 * m * v ^ 2)
    }
    
    
    # HERE'S WHAT THE TRAJECTORIES LOOK LIKE
    initial.conditions <- function(N, m = NA, T = 288){
      if(missing(m)){
        m <- seq(24.43433, 48.86866, length.out = N) * amu
      }else{
        if(length(m) != N) stop("Length of m must be N")
      } 
      E.avg <- 3/2 * k.B * T  # Energy per molecule in one dimension
      E <- N * E.avg * (r <- runif(N)) / sum(r)
      v0 <- sign(runif(N) - 0.5) * sqrt(2 * (KE <- runif(N) * E) / m)
      z0 <- (E - KE) / m / g
      list(m = m, z0 = z0, v0 = v0)
    }
    
    N <- 4  # Number of molecules
    t <- seq(0, 200, 0.1)
    
    inits <- initial.conditions(N)
    m <- inits$m
    z0 <- inits$z0
    v0 <- inits$v0
    trial <- gas(m, z0, v0, t, 0.75)
    plot(NA, xlim = range(t), ylim = range(trial$z), xlab = "Time (Seconds)",
         ylab = "Altitude (Meters)", 
         main = paste(N, "-Particle-Gas Motion in One Dimension", sep = ""))
    grid()
    for(i in 1:N) lines(t, trial$z[i,], col = i, lwd = 2)
    
    
    #  TO TAKE STATISTICS, WE GENERATE LONG RECORDS, WITH DIFFERENT NUMBERS OF
    #  MOLECULES
    t <- 0:1000000
    M <- 5
    trials <- ab <- list()
    ab[[1]] <- matrix(c(3/2 * k.B * T, -m[1] * g), nrow = 1)
    for(N in 2:M){
        inits <- initial.conditions(N)
      m <- inits$m
      z0 <- inits$z0
      v0 <- inits$v0
      trials[[N]] <- gas(m, z0, v0, t, 0.75)
      ab[[N]] <- matrix(nrow = N, ncol = 2)
      for(i in 1:N) ab[[N]][i,] <- 
        lm(trials[[N]]$K[i,] ~ trials[[N]]$z[i,])$coefficients
    }
    
    #  Plot the lapse rate of the lightest molecule in each trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[2]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent System Sizes")
    grid()
    for(i in 1:M) lines(c(0, -ab[[i]][1, 1] / ab[[i]][1, 2]), 
                        c(ab[[i]][1, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(1:M, "-Molecule System", sep = ""))
    
    #  Compute and plot altitude histograms
    zmax <- dmax <- 0
    histo <- list()
    for(i in 2:M){
      histo[[i]] <- hist(trials[[i]]$z[1,], plot = FALSE)
      dmax <- max(dmax, histo[[i]]$density)
      zmax <- max(zmax, histo[[i]]$breaks)
    }
    plot(NA, xlim = c(0, zmax), ylim = c(0, dmax), xlab = "Altitude (Meters)",
         ylab = "Probability Density (/Meter)", 
         main = "Molecule-Presence Probability\nDensity as Function of Altitude")
    grid()
    for(i in M:2) lines(histo[[i]]$mids, histo[[i]]$density, col = i, lwd = 2)
    legend("topright", col = 2:M, lty = 1, lwd = 2, bty = "n",
           legend = paste(2:M, "-Molecule System", sep = ""))
    
    #  Determine ratios of lapse rate to weight
    simulation.ratio <- numeric(M)
    for(i in 1:M) simulation.ratio[i] <- 
      -initial.conditions(2)$m[1] * g / ab[[i]][1, 2]
    theoretical.ratio <- 3 * (1:M) - 2
    rbind(theoretical.ratio, simulation.ratio)
    
    #  Plot the lapse rate of every molecule in the last trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[M]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent Molecule Masses")
    grid()
    for(i in 1:M) lines(c(0, -ab[[M]][i, 1] / ab[[M]][i, 2]), 
                        c(ab[[M]][i, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(round(m / amu), "AMUs"), title = "Particle Mass")
    
  16. g <- 9.8 # Gravitational acceleration, m/sec^2
    k.B <- 1.38064852E-23 # Boltzmann's constant, J/K
    amu <- 1.660539040E-27 # Atomic mass unit, kg
    T <- 288 # Temperature, K

    gas <- function(m, z0, v0, t, collision.prob = 0.5){
    N <- length(m)
    if(length(z0) != N | length(v0) != N)
    stop("m, z0, and v0 must be the same length")
    g <- 9.8
    z1 <- z0
    v1 <- v0
    t1 <- t[1]
    v <- z <- matrix(nrow = N, ncol = length(t))
    repeat{
    # Decide whether (provisionally) to allow collision and, if collision would
    # be allowed, which molecules would collide and when:
    tc <- Inf
    colliders <- NULL
    if(N > 1){
    for(i in 1:(N - 1)){
    for(j in (i + 1):N){
    if(collision.prob > runif(1)){
    tc.i <- t1 - diff(z1[c(i, j)]) / diff(v1[c(i, j)])
    if(tc.i <= t1) next
    if(tc.i < tc){
    tc <- tc.i
    colliders <- matrix(c(i, j), nrow = 1)
    }else if(tc.i == tc){
    colliders <- rbind(colliders, c(i, j))
    }
    }
    }
    }
    }

    # Determine provisional bounce time and which molecules would bounce
    tbs <- numeric(N)
    for(i in 1:N) tbs[i] <- max(Re(polyroot(c(z1[i], v1[i], -g / 2))))
    tb <- min(tbs)
    bouncers <- which(tbs == tb)
    tb <- tb + t1

    # End of current interval is earlier of provisional collision and bounce
    # times:
    t2 <- min(tc, tb)
    interval <- which(t >= t1 & t < t2)

    # Current interval's position and velocity curves
    z[, interval] <- z1 + v1 %*% t((t[interval] - t1)) +
    rep(-g, N) %*% t((t[interval] - t1) ^ 2 / 2)
    v[, interval] <- v1 + rep(-g, N) %*% t((t[interval] - t1))

    # for(i in 1:N) lines(t[interval], z[i, interval], col = i, lty = 3, lwd = 3)

    # Next interval's initial conditions:
    z1 <- z1 + v1 * (t2 - t1) - g * (t2 - t1) ^ 2 / 2
    v1 <- v1 - g * (t2 - t1)

    # Implement collisions or bounces, whichever would come first:
    if(tc < tb){
    vc <- v1
    for(i in 1:dim(colliders)[1]){
    z1[colliders[i,]] <- rep(mean(z1[colliders[i,]]), 2)
    v1[colliders[i, 1]] <-
    ((-diff(m[colliders[i,]])) * vc[colliders[i, 1]] +
    2 * m[colliders[i, 2]] * vc[colliders[i, 2]]) /
    sum(m[colliders[i,]])
    v1[colliders[i, 2]] <- v1[colliders[i, 1]] - diff(vc[colliders[i,]])
    }
    }else{
    v1[bouncers] <- -v1[bouncers]
    }
    t1 <- t2
    if(t[length(t)] < t1) break
    }
    list(t = t, z = z, v = v, K = 1/2 * m * v ^ 2)
    }

    # HERE'S WHAT THE TRAJECTORIES LOOK LIKE
    initial.conditions <- function(N, m = NA, T = 288){
    if(missing(m)){
    m <- seq(24.43433, 48.86866, length.out = N) * amu
    }else{
    if(length(m) != N) stop("Length of m must be N")
    }
    E.avg <- 3/2 * k.B * T # Energy per molecule in one dimension
    E <- N * E.avg * (r <- runif(N)) / sum(r)
    v0 <- sign(runif(N) - 0.5) * sqrt(2 * (KE <- runif(N) * E) / m)
    z0 <- (E - KE) / m / g
    list(m = m, z0 = z0, v0 = v0)
    }

    N <- 4 # Number of molecules
    t <- seq(0, 200, 0.1)

    inits <- initial.conditions(N)
    m <- inits$m
    z0 <- inits$z0
    v0 <- inits$v0
    trial <- gas(m, z0, v0, t, 0.75)
    plot(NA, xlim = range(t), ylim = range(trial$z), xlab = "Time (Seconds)",
    ylab = "Altitude (Meters)",
    main = paste(N, "-Particle-Gas Motion in One Dimension", sep = ""))
    grid()
    for(i in 1:N) lines(t, trial$z[i,], col = i, lwd = 2)

    # TO TAKE STATISTICS, WE GENERATE LONG RECORDS, WITH DIFFERENT NUMBERS OF
    # MOLECULES
    t <- 0:1000000
    M <- 5
    trials <- ab <- list()
    ab[[1]] <- matrix(c(3/2 * k.B * T, -m[1] * g), nrow = 1)
    for(N in 2:M){
    inits <- initial.conditions(N)
    m <- inits$m
    z0 <- inits$z0
    v0 <- inits$v0
    trials[[N]] <- gas(m, z0, v0, t, 0.75)
    ab[[N]] <- matrix(nrow = N, ncol = 2)
    for(i in 1:N) ab[[N]][i,] <-
    lm(trials[[N]]$K[i,] ~ trials[[N]]$z[i,])$coefficients
    }

    # Plot the lapse rate of the lightest molecule in each trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]),
    ylim = range(0, ab[[2]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
    ylab = "Temperature (Kelvins)",
    main = "Temperature vs. Altitude for\nDifferent System Sizes")
    grid()
    for(i in 1:M) lines(c(0, -ab[[i]][1, 1] / ab[[i]][1, 2]),
    c(ab[[i]][1, 1] * 2 / k.B, 0),
    col = i, lwd = 2)
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
    legend = paste(1:M, "-Molecule System", sep = ""))

    # Compute and plot altitude histograms
    zmax <- dmax <- 0
    histo <- list()
    for(i in 2:M){
    histo[[i]] <- hist(trials[[i]]$z[1,], plot = FALSE)
    dmax <- max(dmax, histo[[i]]$density)
    zmax <- max(zmax, histo[[i]]$breaks)
    }
    plot(NA, xlim = c(0, zmax), ylim = c(0, dmax), xlab = "Altitude (Meters)",
    ylab = "Probability Density (/Meter)",
    main = "Molecule-Presence Probability\nDensity as Function of Altitude")
    grid()
    for(i in M:2) lines(histo[[i]]$mids, histo[[i]]$density, col = i, lwd = 2)
    legend("topright", col = 2:M, lty = 1, lwd = 2, bty = "n",
    legend = paste(2:M, "-Molecule System", sep = ""))

    # Determine ratios of lapse rate to weight
    simulation.ratio <- numeric(M)
    for(i in 1:M) simulation.ratio[i] <-
    -initial.conditions(2)$m[1] * g / ab[[i]][1, 2]
    theoretical.ratio <- 3 * (1:M) - 2
    rbind(theoretical.ratio, simulation.ratio)

    # Plot the lapse rate of every molecule in the last trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]),
    ylim = range(0, ab[[M]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
    ylab = "Temperature (Kelvins)",
    main = "Temperature vs. Altitude for\nDifferent Molecule Masses")
    grid()
    for(i in 1:M) lines(c(0, -ab[[M]][i, 1] / ab[[M]][i, 2]),
    c(ab[[M]][i, 1] * 2 / k.B, 0),
    col = i, lwd = 2)
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
    legend = paste(round(m / amu), "AMUs"), title = "Particle Mass")

  17. g = 9.8 # Gravitational acceleration, m/sec^2
    k.B = 1.38064852E-23 # Boltzmann's constant, J/K
    amu = 1.660539040E-27 # Atomic mass unit, kg
    T = 288 # Temperature, K
    
    gas = function(m, z0, v0, t, collision.prob = 0.5){
      N = length(m)
      if(length(z0) != N | length(v0) != N) 
        stop("m, z0, and v0 must be the same length")
      g = 9.8
      z1 = z0
      v1 = v0
      t1 = t[1]
      v = z = matrix(nrow = N, ncol = length(t))
      repeat{
        #  Decide whether (provisionally) to allow collision and, if collision would
        #  be allowed, which molecules would collide and when:
        tc = Inf
        colliders = NULL
        if(N > 1){
          for(i in 1:(N - 1)){
            for(j in (i + 1):N){
              if(collision.prob > runif(1)){
                tc.i = t1 - diff(z1[c(i, j)]) / diff(v1[c(i, j)])
                if(tc.i <= t1) next
                if(tc.i < tc){
                  tc = tc.i
                  colliders = matrix(c(i, j), nrow = 1)
                }else if(tc.i == tc){
                  colliders = rbind(colliders, c(i, j))
                }
              }
            }
          }
        }

        # Determine provisional bounce time and which molecules would bounce
        tbs = numeric(N)
        for(i in 1:N) tbs[i] = max(Re(polyroot(c(z1[i], v1[i], -g / 2))))
        tb = min(tbs)
        bouncers = which(tbs == tb)
        tb = tb + t1

        #  End of current interval is earlier of provisional collision and bounce
        #  times:
        t2 = min(tc, tb)
        interval = which(t >= t1 & t < t2)

        # Current interval's position and velocity curves
        z[, interval] = z1 + v1 %*% t((t[interval] - t1)) + 
          rep(-g, N) %*% t((t[interval] - t1) ^ 2 / 2)
        v[, interval] = v1 + rep(-g, N) %*% t((t[interval] - t1))
        
        # for(i in 1:N) lines(t[interval], z[i, interval], col = i, lty = 3, lwd = 3)
        
        # Next interval's initial conditions:
        z1 = z1 + v1 * (t2 - t1) - g * (t2 - t1) ^ 2 / 2
        v1 = v1 - g * (t2 - t1)
        
        #  Implement collisions or bounces, whichever would come first:
        if(tc < tb){
          vc = v1
          for(i in 1:dim(colliders)[1]){
            z1[colliders[i,]] = rep(mean(z1[colliders[i,]]), 2)
            v1[colliders[i, 1]] = 
              ((-diff(m[colliders[i,]])) * vc[colliders[i, 1]] +
                 2 * m[colliders[i, 2]] * vc[colliders[i, 2]]) / 
              sum(m[colliders[i,]])
            v1[colliders[i, 2]] = v1[colliders[i, 1]] - diff(vc[colliders[i,]])
          }
        }else{
          v1[bouncers] = -v1[bouncers]
        }
        t1 = t2
        if(t[length(t)] < t1) break
      }
      list(t = t, z = z, v = v, K = 1/2 * m * v ^ 2)
    }
    
    
    # HERE'S WHAT THE TRAJECTORIES LOOK LIKE
    initial.conditions = function(N, m = NA, T = 288){
      if(missing(m)){
        m = seq(24.43433, 48.86866, length.out = N) * amu
      }else{
        if(length(m) != N) stop("Length of m must be N")
      } 
      E.avg = 3/2 * k.B * T  # Energy per molecule in one dimension
      E = N * E.avg * (r = runif(N)) / sum(r)
      v0 = sign(runif(N) - 0.5) * sqrt(2 * (KE = runif(N) * E) / m)
      z0 = (E - KE) / m / g
      list(m = m, z0 = z0, v0 = v0)
    }
    
    N = 4  # Number of molecules
    t = seq(0, 200, 0.1)
    
    inits = initial.conditions(N)
    m = inits$m
    z0 = inits$z0
    v0 = inits$v0
    trial = gas(m, z0, v0, t, 0.75)
    plot(NA, xlim = range(t), ylim = range(trial$z), xlab = "Time (Seconds)",
         ylab = "Altitude (Meters)", 
         main = paste(N, "-Particle-Gas Motion in One Dimension", sep = ""))
    grid()
    for(i in 1:N) lines(t, trial$z[i,], col = i, lwd = 2)
    
    
    #  TO TAKE STATISTICS, WE GENERATE LONG RECORDS, WITH DIFFERENT NUMBERS OF
    #  MOLECULES
    t = 0:1000000
    t = 0:1000
    M = 5
    trials = ab = list()
    ab[[1]] = matrix(c(3/2 * k.B * T, -m[1] * g), nrow = 1)
    for(N in 2:M){
      inits = initial.conditions(N)
      m = inits$m
      z0 = inits$z0
      v0 = inits$v0
      trials[[N]] = gas(m, z0, v0, t, 0.75)
      ab[[N]] = matrix(nrow = N, ncol = 2)
      for(i in 1:N) ab[[N]][i,] = 
        lm(trials[[N]]$K[i,] ~ trials[[N]]$z[i,])$coefficients
    }
    
    #  Plot the lapse rate of the lightest molecule in each trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[2]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent System Sizes")
    grid()
    for(i in 1:M) lines(c(0, -ab[[i]][1, 1] / ab[[i]][1, 2]), 
                        c(ab[[i]][1, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(1:M, "-Molecule System", sep = ""))
    
    #  Compute and plot altitude histograms
    zmax = dmax = 0
    histo = list()
    for(i in 2:M){
      histo[[i]] = hist(trials[[i]]$z[1,], plot = FALSE)
      dmax = max(dmax, histo[[i]]$density)
      zmax = max(zmax, histo[[i]]$breaks)
    }
    plot(NA, xlim = c(0, zmax), ylim = c(0, dmax), xlab = "Altitude (Meters)",
         ylab = "Probability Density (/Meter)", 
         main = "Molecule-Presence Probability\nDensity as Function of Altitude")
    grid()
    for(i in M:2) lines(histo[[i]]$mids, histo[[i]]$density, col = i, lwd = 2)
    legend("topright", col = 2:M, lty = 1, lwd = 2, bty = "n",
           legend = paste(2:M, "-Molecule System", sep = ""))
    
    #  Determine ratios of lapse rate to weight
    simulation.ratio = numeric(M)
    for(i in 1:M) simulation.ratio[i] = 
      -initial.conditions(2)$m[1] * g / ab[[i]][1, 2]
    theoretical.ratio = 3 * (1:M) - 2
    rbind(theoretical.ratio, simulation.ratio)
    
    #  Plot the lapse rate of every molecule in the last trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[M]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent Molecule Masses")
    grid()
    for(i in 1:M) lines(c(0, -ab[[M]][i, 1] / ab[[M]][i, 2]), 
                        c(ab[[M]][i, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(round(m / amu), "AMUs"), title = "Particle Mass")
    
  18. Here’s yet another try at not having code sections eaten:

    g = 9.8 # Gravitational acceleration, m/sec^2
    k.B = 1.38064852E-23 # Boltzmann's constant, J/K
    amu = 1.660539040E-27 # Atomic mass unit, kg
    T = 288 # Temperature, K
    
    gas = function(m, z0, v0, t, collision.prob = 0.5){
      N = length(m)
      if(length(z0) != N | length(v0) != N) 
        stop("m, z0, and v0 must be the same length")
      g = 9.8
      z1 = z0
      v1 = v0
      t1 = t[1]
      v = z = matrix(nrow = N, ncol = length(t))
      repeat{
        #  Decide whether (provisionally) to allow collision and, if collision would
        #  be allowed, which molecules would collide and when:
        tc = Inf
        colliders = NULL
        if(N > 1){
          for(i in 1:(N - 1)){
            for(j in (i + 1):N){
              if(collision.prob > runif(1)){
                tc.i = t1 - diff(z1[c(i, j)]) / diff(v1[c(i, j)])
                if(tc.i <= t1) next
                if(tc.i < tc){
                  tc = tc.i
                  colliders = matrix(c(i, j), nrow = 1)
                }else if(tc.i == tc){
                  colliders = rbind(colliders, c(i, j))
                }
              }
            }
          }
        }

        # Determine provisional bounce time and which molecules would bounce
        tbs = numeric(N)
        for(i in 1:N) tbs[i] = max(Re(polyroot(c(z1[i], v1[i], -g / 2))))
        tb = min(tbs)
        bouncers = which(tbs == tb)
        tb = tb + t1

        #  End of current interval is earlier of provisional collision and bounce
        #  times:
        t2 = min(tc, tb)
        interval = which(t >= t1 & t < t2)

        # Current interval's position and velocity curves
        z[, interval] = z1 + v1 %*% t((t[interval] - t1)) + 
          rep(-g, N) %*% t((t[interval] - t1) ^ 2 / 2)
        v[, interval] = v1 + rep(-g, N) %*% t((t[interval] - t1))
        
        # for(i in 1:N) lines(t[interval], z[i, interval], col = i, lty = 3, lwd = 3)
        
        # Next interval's initial conditions:
        z1 = z1 + v1 * (t2 - t1) - g * (t2 - t1) ^ 2 / 2
        v1 = v1 - g * (t2 - t1)
        
        #  Implement collisions or bounces, whichever would come first:
        if(tc < tb){
          vc = v1
          for(i in 1:dim(colliders)[1]){
            z1[colliders[i,]] = rep(mean(z1[colliders[i,]]), 2)
            v1[colliders[i, 1]] = 
              ((-diff(m[colliders[i,]])) * vc[colliders[i, 1]] +
                 2 * m[colliders[i, 2]] * vc[colliders[i, 2]]) / 
              sum(m[colliders[i,]])
            v1[colliders[i, 2]] = v1[colliders[i, 1]] - diff(vc[colliders[i,]])
          }
        }else{
          v1[bouncers] = -v1[bouncers]
        }
        t1 = t2
        if(t[length(t)] < t1) break
      }
      list(t = t, z = z, v = v, K = 1/2 * m * v ^ 2)
    }
    
    
    # HERE'S WHAT THE TRAJECTORIES LOOK LIKE
    initial.conditions = function(N, m = NA, T = 288){
      if(missing(m)){
        m = seq(24.43433, 48.86866, length.out = N) * amu
      }else{
        if(length(m) != N) stop("Length of m must be N")
      } 
      E.avg = 3/2 * k.B * T  # Energy per molecule in one dimension
      E = N * E.avg * (r = runif(N)) / sum(r)
      v0 = sign(runif(N) - 0.5) * sqrt(2 * (KE = runif(N) * E) / m)
      z0 = (E - KE) / m / g
      list(m = m, z0 = z0, v0 = v0)
    }
    
    N = 4  # Number of molecules
    t = seq(0, 200, 0.1)
    
    inits = initial.conditions(N)
    m = inits$m
    z0 = inits$z0
    v0 = inits$v0
    trial = gas(m, z0, v0, t, 0.75)
    plot(NA, xlim = range(t), ylim = range(trial$z), xlab = "Time (Seconds)",
         ylab = "Altitude (Meters)", 
         main = paste(N, "-Particle-Gas Motion in One Dimension", sep = ""))
    grid()
    for(i in 1:N) lines(t, trial$z[i,], col = i, lwd = 2)
    
    
    #  TO TAKE STATISTICS, WE GENERATE LONG RECORDS, WITH DIFFERENT NUMBERS OF
    #  MOLECULES
    t = 0:1000000
    t = 0:1000
    M = 5
    trials = ab = list()
    ab[[1]] = matrix(c(3/2 * k.B * T, -m[1] * g), nrow = 1)
    for(N in 2:M){
      inits = initial.conditions(N)
      m = inits$m
      z0 = inits$z0
      v0 = inits$v0
      trials[[N]] = gas(m, z0, v0, t, 0.75)
      ab[[N]] = matrix(nrow = N, ncol = 2)
      for(i in 1:N) ab[[N]][i,] = 
        lm(trials[[N]]$K[i,] ~ trials[[N]]$z[i,])$coefficients
    }
    
    #  Plot the lapse rate of the lightest molecule in each trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[2]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent System Sizes")
    grid()
    for(i in 1:M) lines(c(0, -ab[[i]][1, 1] / ab[[i]][1, 2]), 
                        c(ab[[i]][1, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(1:M, "-Molecule System", sep = ""))
    
    #  Compute and plot altitude histograms
    zmax = dmax = 0
    histo = list()
    for(i in 2:M){
      histo[[i]] = hist(trials[[i]]$z[1,], plot = FALSE)
      dmax = max(dmax, histo[[i]]$density)
      zmax = max(zmax, histo[[i]]$breaks)
    }
    plot(NA, xlim = c(0, zmax), ylim = c(0, dmax), xlab = "Altitude (Meters)",
         ylab = "Probability Density (/Meter)", 
         main = "Molecule-Presence Probability\nDensity as Function of Altitude")
    grid()
    for(i in M:2) lines(histo[[i]]$mids, histo[[i]]$density, col = i, lwd = 2)
    legend("topright", col = 2:M, lty = 1, lwd = 2, bty = "n",
           legend = paste(2:M, "-Molecule System", sep = ""))
    
    #  Determine ratios of lapse rate to weight
    simulation.ratio = numeric(M)
    for(i in 1:M) simulation.ratio[i] = 
      -initial.conditions(2)$m[1] * g / ab[[i]][1, 2]
    theoretical.ratio = 3 * (1:M) - 2
    rbind(theoretical.ratio, simulation.ratio)
    
    #  Plot the lapse rate of every molecule in the last trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]), 
         ylim = range(0, ab[[M]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
         ylab = "Temperature (Kelvins)", 
         main = "Temperature vs. Altitude for\nDifferent Molecule Masses")
    grid()
    for(i in 1:M) lines(c(0, -ab[[M]][i, 1] / ab[[M]][i, 2]), 
                        c(ab[[M]][i, 1] * 2 / k.B, 0),
                        col = i, lwd = 2)   
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
           legend = paste(round(m / amu), "AMUs"), title = "Particle Mass")
    
  19. Now we’ll try the “code” tag:

    g <- 9.8 # Gravitational acceleration, m/sec^2
    k.B <- 1.38064852E-23 # Boltzmann's constant, J/K
    amu <- 1.660539040E-27 # Atomic mass unit, kg
    T <- 288 # Temperature, K

    gas <- function(m, z0, v0, t, collision.prob = 0.5){
    N <- length(m)
    if(length(z0) != N | length(v0) != N)
    stop("m, z0, and v0 must be the same length")
    g <- 9.8
    z1 <- z0
    v1 <- v0
    t1 <- t[1]
    v <- z <- matrix(nrow = N, ncol = length(t))
    repeat{
    # Decide whether (provisionally) to allow collision and, if collision would
    # be allowed, which molecules would collide and when:
    tc <- Inf
    colliders <- NULL
    if(N > 1){
    for(i in 1:(N - 1)){
    for(j in (i + 1):N){
    if(collision.prob > runif(1)){
    tc.i <- t1 - diff(z1[c(i, j)]) / diff(v1[c(i, j)])
    if(tc.i <= t1) next
    if(tc.i < tc){
    tc <- tc.i
    colliders <- matrix(c(i, j), nrow = 1)
    }else if(tc.i == tc){
    colliders <- rbind(colliders, c(i, j))
    }
    }
    }
    }
    }

    # Determine provisional bounce time and which molecules would bounce
    tbs <- numeric(N)
    for(i in 1:N) tbs[i] <- max(Re(polyroot(c(z1[i], v1[i], -g / 2))))
    tb <- min(tbs)
    bouncers <- which(tbs == tb)
    tb <- tb + t1

    # End of current interval is earlier of provisional collision and bounce
    # times:
    t2 <- min(tc, tb)
    interval <- which(t >= t1 & t < t2)

    # Current interval's position and velocity curves
    z[, interval] <- z1 + v1 %*% t((t[interval] - t1)) +
    rep(-g, N) %*% t((t[interval] - t1) ^ 2 / 2)
    v[, interval] <- v1 + rep(-g, N) %*% t((t[interval] - t1))

    # for(i in 1:N) lines(t[interval], z[i, interval], col = i, lty = 3, lwd = 3)

    # Next interval's initial conditions:
    z1 <- z1 + v1 * (t2 - t1) - g * (t2 - t1) ^ 2 / 2
    v1 <- v1 - g * (t2 - t1)

    # Implement collisions or bounces, whichever would come first:
    if(tc < tb){
    vc <- v1
    for(i in 1:dim(colliders)[1]){
    z1[colliders[i,]] <- rep(mean(z1[colliders[i,]]), 2)
    v1[colliders[i, 1]] <-
    ((-diff(m[colliders[i,]])) * vc[colliders[i, 1]] +
    2 * m[colliders[i, 2]] * vc[colliders[i, 2]]) /
    sum(m[colliders[i,]])
    v1[colliders[i, 2]] <- v1[colliders[i, 1]] - diff(vc[colliders[i,]])
    }
    }else{
    v1[bouncers] <- -v1[bouncers]
    }
    t1 <- t2
    if(t[length(t)] < t1) break
    }
    list(t = t, z = z, v = v, K = 1/2 * m * v ^ 2)
    }

    # HERE'S WHAT THE TRAJECTORIES LOOK LIKE
    initial.conditions <- function(N, m = NA, T = 288){
    if(missing(m)){
    m <- seq(24.43433, 48.86866, length.out = N) * amu
    }else{
    if(length(m) != N) stop("Length of m must be N")
    }
    E.avg <- 3/2 * k.B * T # Energy per molecule in one dimension
    E <- N * E.avg * (r <- runif(N)) / sum(r)
    v0 <- sign(runif(N) - 0.5) * sqrt(2 * (KE <- runif(N) * E) / m)
    z0 <- (E - KE) / m / g
    list(m = m, z0 = z0, v0 = v0)
    }

    N <- 4 # Number of molecules
    t <- seq(0, 200, 0.1)

    inits <- initial.conditions(N)
    m <- inits$m
    z0 <- inits$z0
    v0 <- inits$v0
    trial <- gas(m, z0, v0, t, 0.75)
    plot(NA, xlim = range(t), ylim = range(trial$z), xlab = "Time (Seconds)",
    ylab = "Altitude (Meters)",
    main = paste(N, "-Particle-Gas Motion in One Dimension", sep = ""))
    grid()
    for(i in 1:N) lines(t, trial$z[i,], col = i, lwd = 2)

    # TO TAKE STATISTICS, WE GENERATE LONG RECORDS, WITH DIFFERENT NUMBERS OF
    # MOLECULES
    t <- 0:1000000
    t <- 0:1000
    M <- 5
    trials <- ab <- list()
    ab[[1]] <- matrix(c(3/2 * k.B * T, -m[1] * g), nrow = 1)
    for(N in 2:M){
    inits <- initial.conditions(N)
    m <- inits$m
    z0 <- inits$z0
    v0 <- inits$v0
    trials[[N]] <- gas(m, z0, v0, t, 0.75)
    ab[[N]] <- matrix(nrow = N, ncol = 2)
    for(i in 1:N) ab[[N]][i,] <-
    lm(trials[[N]]$K[i,] ~ trials[[N]]$z[i,])$coefficients
    }

    # Plot the lapse rate of the lightest molecule in each trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]),
    ylim = range(0, ab[[2]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
    ylab = "Temperature (Kelvins)",
    main = "Temperature vs. Altitude for\nDifferent System Sizes")
    grid()
    for(i in 1:M) lines(c(0, -ab[[i]][1, 1] / ab[[i]][1, 2]),
    c(ab[[i]][1, 1] * 2 / k.B, 0),
    col = i, lwd = 2)
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
    legend = paste(1:M, "-Molecule System", sep = ""))

    # Compute and plot altitude histograms
    zmax <- dmax <- 0
    histo <- list()
    for(i in 2:M){
    histo[[i]] <- hist(trials[[i]]$z[1,], plot = FALSE)
    dmax <- max(dmax, histo[[i]]$density)
    zmax <- max(zmax, histo[[i]]$breaks)
    }
    plot(NA, xlim = c(0, zmax), ylim = c(0, dmax), xlab = "Altitude (Meters)",
    ylab = "Probability Density (/Meter)",
    main = "Molecule-Presence Probability\nDensity as Function of Altitude")
    grid()
    for(i in M:2) lines(histo[[i]]$mids, histo[[i]]$density, col = i, lwd = 2)
    legend("topright", col = 2:M, lty = 1, lwd = 2, bty = "n",
    legend = paste(2:M, "-Molecule System", sep = ""))

    # Determine ratios of lapse rate to weight
    simulation.ratio <- numeric(M)
    for(i in 1:M) simulation.ratio[i] <-
    -initial.conditions(2)$m[1] * g / ab[[i]][1, 2]
    theoretical.ratio <- 3 * (1:M) - 2
    rbind(theoretical.ratio, simulation.ratio)

    # Plot the lapse rate of every molecule in the last trial
    plot(NA, xlim = range(0, -ab[[M]][1, 1] / ab[[M]][1, 2]),
    ylim = range(0, ab[[M]][1, 1] * 2 / k.B), xlab = "Altitude (Meters)",
    ylab = "Temperature (Kelvins)",
    main = "Temperature vs. Altitude for\nDifferent Molecule Masses")
    grid()
    for(i in 1:M) lines(c(0, -ab[[M]][i, 1] / ab[[M]][i, 2]),
    c(ab[[M]][i, 1] * 2 / k.B, 0),
    col = i, lwd = 2)
    legend("topright", bty = "n", lty = 1, lwd = 2, col = 1:M,
    legend = paste(round(m / amu), "AMUs"), title = "Particle Mass")

  20. What you would have seen is that it simulates a one-dimensional gas in which (a necessarily small number of) molecules in a gravitational field sometimes collide and sometimes pass through each other.

  21. If you simulate something like a million seconds each for two-, three-, four-, and five-molecule systems, and if for each system you regress one molecule’s kinetic energy (temperature) against altitude, you get the following illustration that the lapse rate falls as the number of molecules increases.

  22. Finally, you’d see that within a given gas system the different-molecular-mass constituents have different lapse rates. So at an altitude above zero they maintain different temperatures: temperatures of intimately mixed constituents differ without heat flow between them.

  23. However, I’ve read that the error for the absolute type of sensor (Non-vented) is +/- 2 cm*

    *Because two sensors are required, the errors combine, since atmospheric pressure and water pressure are measured separately.

  24. I wonder why pressure/depth sensors aren’t being used, as wave motion cancels at the scale found at typical tidal stations. The error for a gauged/vented sensor is just +/- 1 cm.

    However, I’ve read that the error for the absolute type of sensor (Non-vented) is +/- 2 cm*

    *Because two sensors are required, the errors combine, since atmospheric pressure and water pressure are measured separately.

  25. Warming did happen, from mid-1970s to late 1990s, but it was not, primarily, driven by CO₂.

    Climate Change “Problem” Solved – its Natural. Conclusions:

    – Climate change during recent centuries is periodic
    – Warming since 1870, attributed to CO₂, is really caused by ~200 year (solar) De Vries Cycle
    – Present cooling and increased warming (1970-1997) is due to 65-year AMO/PDO cycles
    – There is no trace of CO₂ causing warming.

    Prof Weiss, youtube:


    Paper: H.-J. Lüdecke, C. O. Weiss, and A. Hempelmann, 2015. doi:10.5194/cpd-11-279-201

  26. “By now the causes are obvious. High among them are…

    Incompetent government licensing. Until recently, in the US companies received construction permits based on incomplete plans. They then applied for an operating license, often leading to rebuilding and long delays. …

    Ever-changing government policy, often highly adverse.

  27. ==> Willis Eschenbach (December 23, 2017 at 7:26 pm)

    Willis, I want to question this (Your prioritisation of Kirchhoff’s law), as I’ve tried to get a good answer from other sources (I’m sure there would be a simple and sensible answer, that you might offer. One that I’ve failed to find! ;-)

    When researching this several years ago, I came across a PhD physicist who retracted a paper because of this exact issue. He had confused emission and absorption by prioritising Kirchhoff* over Planck**! He had made the mistake of assuming Kirchhoff’s law to be true for all frequencies and materials, but this is not the case.

    I mention this only because it appears to be an often confused issue, even for the highly trained. I’m not even remotely in the category of these professionals and that is the point, are we – you – sure that what you are saying is clearly understood, even by those we expect to know these things?

    In my “unwashed” layman’s terms, snow is a white body for sunlight but a black body for heat and even in that “IR” spectrum there is some ill-defined overlap between SW and LW.

    Willis, in response to Ron Clutz (December 23, 2017 at 7:04 pm) you say:

    “Kirchoff’s Law says that absorptivity is equal to emissivity…”

    And yet you quote figures for IR emissivity (9-12 microns) and list fresh snow as 0.99. Yet Ron Clutz had spoken about sunlight*** and therefore, strictly speaking your figure differs from the actual value, which has at maximum a value 0.30 absorptivity for sunlight – that he was presumably speaking of.

    The observation of a material-specific (Snow in this case) difference with regard to absorption and emission is not inconsistent with a good emitter simultaneously being a good absorber. The differences is that absorption and emission are dependent on the wavelength range.

    In the context provided above, what is it that you are specifically disagreeing with in the statement of Ron Clutz?***

    *Kirchhoff’s law – “Emission and Absorption are equal”
    **Planck’s law – “Emission is dependent upon the wavelength and the absolute temperature”
    ***”Radiation is only heat if the exposed material absorbs it. Solar radiation is high energy, absorbed by both land and sea and warmed by it. The bit of far longwave radiation from the cool atmosphere is not comparable.” – Ron Clutz

  28. Harriet Harman and Tessa Jowel,
    Both of whom I’m not a fan.
    They got together with a rotten scheme,
    To implement their evil smoking ban.

    A decade later we now have tories,
    To sell us yet more rotten stories.
    Tales of woe the sky is falling,
    Terrifying! Global Warming!

    And now as if that’s not bad enough,
    Times are about to get more tough.
    With Climate Change and rising seas,
    You’d think we’d be in it up to our knees!

    But don’t forget the Poley Bears,
    We’re about to lose a few more hairs.
    They tell us that the poles are melting,
    And soon we’ll all be sweltering.

    And don’t forget our coloured cousins,
    With them we’re supposed to integrate.
    But every time we try just that,
    They just want to segregate.

    Just when you think it’s gone far enough,
    They want to make it yet more tough.
    To live our lives is quite a feat,
    They want to tax our bloody meat!

    Now Labour has a brand new champion,
    But I don’t want him as my companion.
    His followers know him as Comrade Corbyn,
    I think he belongs in a dustbin!

    But maybe there’s a glimmer of hope,
    And if we stand together we may cope.
    Their rotten lies may finally end,
    Before they drive us round the bend!

    In God we trust in all our lives,
    To get us all through our strives.
    To end their threat to our liberty,
    We must all fight in sincerity.

    Harriet Harman and Tessa Jowel,
    Both of whom I find quite foul.
    And all the bloody rest of them,
    should rot away and howl!

    Merry Christmas to all! ;)

  29. I have given up on posting on this site because most of my posts get deleted. I asked why but never got an answer. Others have complained about this as well.

  30. Anthony, I have tried 3 times (twice yesterday and once today) to post a comment to Willis’ blog article “Been There, Exceeded That”. Have I fallen foul of the phantom blog impersonator that we had to deal with a year or so ago? All the best, David Cosserat

  31. => Nick Stokes January 11, 2018 at 1:31 am

    “Trenberth’s calculation is very simple. It just multiplies the world’s average rainfall (about 900 mm/yr) by the latent heat of evaporation. Do you think the basis is wrong? Or the rainfall is underestimated”

    The calculation may be simple but the issue clearly isn’t! Just the thought of the “Pan Evaporation Paradox” and the countless papers attending to it makes it clear that the issue is far from simple; but isn’t that the heart of the criticism of it here?

    It is not very hard to see, that “the basis is wrong” and that “rainfall is underestimated”!

    As for the basis, you are making the argument – for Trenberth – that evaporation is the only way to get H2O (in any or all its phases) into the atmosphere, are you not?

    Now I’m immediately thinking how it can** and to quote the poet – “Let me count the ways”! ;-)

    Speaking of numbers, there are approximately a “gazillion” ;-) papers that have been peer reviewed and published in the last century on what is arguably the single most important “way”.

    Known to the layman as sea spray, it is technically called “entrainment”. And this contribution from the ocean-atmosphere boundary layer under windy conditions is well known and understood to be huge!

    Huge enough, that it “is necessary to take into account storm-caused enhancement of the energy and mass transfer through the ocean surface when constructing climate models and models of general circulation of the atmosphere and ocean, and also when devising methods of long-term weather forecasting.” (Dubov), (Marchuk).

    So, we have according to a vast body of literature* that via this mechanism: “even during a brief stormy period, the ocean is able to deliver to the atmosphere enormous amounts of extra heat and moisture, which can alter substantially the state of the atmosphere over vast regions.” (R.S. Bortkovskii)

    And the key here is that it is kinetic energy that delivers the water vapour to the atmosphere and not the latent heat of evaporation (the temperature of the boundaries determines how much heat is also exchanged, but vaporisation is not dependent on the relative temperatures). Additionally, regarding this heat flux:

    “When the air temperature is quite low (in high latitudes, for example) the spray sensible heat flux can be roughly as large as the spray latent heat flux. In temperate and tropical latitudes, however, the spray latent heat flux virtually always dominates the sensible heat flux. The magnitude of this flux can be quite large. In a 20-m/s wind, in low latitudes, a typical magnitude for the spray latent heat flux is 150 W/m(squared), which is the same order of magnitude as the interfacial latent heat flux.

    Now on a personal note and at the risk of sounding Willis-like, I came across one of the “other ways” while in Zermatt, Switzerland! Last year, in the European Alps, I observed and photographed a well-known phenomenon, powder snow blowing off the high peaks and forming cirrus clouds:

    Later, in the Dolomites of Italy at altitude again, I met a meteorologist measuring ice core temperatures and we discussed my observation. It wasn’t at all new to him that water vapour in the atmosphere could find its way there independently of the latent heat of evaporation!

    There is much more to this but it is now very late in a very long day!

    *See Edgar L Andreas, 1992, Sea Spray and Turbulent Air-Sea Heat Fluxes

    **Wind! Think, the Southern Ocean and The Roaring Forties. It’s as good or better than insolation or LWIR!
    Every turbulent stream rapid or waterfall on earth that did or didn’t cast a rainbow.
    Geothermal (Think, magma meets water/ocean… since time began!)
    Storms, cyclones and tornadoes of course and waterspouts at sea – observed much in my youth – that suck up ocean water and anything in it. (My good friend’s ship got hit by one and it disgorged a tonnage of water. And here in Australia, in my lifetime there have been two occasions when fish fell from the sky – along with precipitation – many miles inland!;-)

  32. It’s all evaporation. And it all takes heat from the surface and transports it (as LH) to higher altitude.

    No Nick, even Trenberth admits that the points I’ve raised are worthy of ongoing study. Latent heat is not the one and only path for water vapour into the atmosphere! And for what it’s worth, I have read his infamous papers!

  33. Talk about Ozploitation!

    This toilet paper has got to go down in history as the finest example of political talking points parading as science, ever written.

    It is so dumb it’s not even wrong!

    For a start, at the time humans arrived, Australia’s inland was covered by vast mega lakes – the remains of the Eromanga Sea:

    …the environment was already changing by the time the first Australians arrived. The overflowing mega-lakes of pre-50,000 years ago had begun to shrink, and reliable supplies of freshwater were in a state of collapse.

    *

    The point is, humans arrived at a time of lowering sea level when the inland was a drying sea. From that time to the present date the inland extent actually expanded while sea levels slowly rose, enough to inundate the shallow land bridge but not the – below sea level – basins of the outback!

    These inland mega-lakes were fed by big rivers such as Cooper Creek and the Diamantina River, which pumped large volumes of water into the continental interior every year to fill the lakes to the levels shown by the position of their ancient beaches. Mega-Lake Eyre held roughly ten times the water volume achievable under today’s wettest climate, and if present now would rank among the ten largest lakes (in area) on Earth. This truly was the inland sea that proved so elusive to Charles Sturt and other 19th-century colonial explorers.

    *

    It is interesting to note:

    “To the surprise of the early mariners who explored Australia’s coastline none of them discovered the mouth of any great river. Consequently, explorers including Flinders, Banks, Oxley, Sturt and King, all assumed that rivers flowing inland from the Great Dividing Range must flow towards an Inland Sea (Flannery 1998, 226; Johnson 2001, 21).”

    They never found the Sea but a huge body of water still exists today, not on the surface but hidden beneath: The Great Artesian Basin.

    “The basin occupies roughly the same area as the Eromanga Sea, the major portion of the water flowing slowly underground from the Great Dividing Range in north Queensland towards South Australia.”

    * “Species-specific responses of Late Quaternary megafauna to climate and humans”: Nature 479, 359–364 (17 November 2011) doi:10.1038/nature10574

  34. Talk about natural climate variation, Lake Eyre fills only intermittently today:

    Minor Flooding: Up to 2 m water covering half the lake: once in 3 years.

    Major Flooding: Up to 4.5 m water covering all 8,000 km² of the lake: once in 10 years.

    Filling: Filling another 50 cm: 2-4 times per century.

    Great Filling: More than 5 m water: 2-4 times per millennium. (Kotwicki 1986)

  35. “Odyssey from Africa (and the Adventures of Ipiki)” is an epic narrative poem telling the story behind the 60,000 years-ago exodus of modern humans from Africa that populated the rest of the world.

  36. Yes, and this theory aligns with the measured data!

    This post also highlights a feeling I’ve had for some time, that both sides are going out of their way to avoid discussing the Pan Evaporation Paradox.

    I think it is because they all have a dog in the game. The data is disruptive because the cause doesn’t have to be explained for the damage to be done and a number of precious theories have been struck a mortal blow!

    If climate is warming, a more energetic hydrologic cycle is expected implying an increase in evaporation. However, observations of pan evaporation across the U.S. and the globe show a decreasing trend in pan evaporation. – J.A. Ramirez, Colorado State University

    And it doesn’t matter where it is measured – wet or dry, desert or tropics, the trend has been down for 68* years to date!

    *For 50 years (1950-2000) the trend was sharply down, before a slight recovery from 2000-2010, but it has been sharply down again since then (back to near the lowest levels of 1993).

  37. Are These Coal Plants “tripping off” because of overheating?
    It seems like news reports are written in a way that suggests that the coal plants are overheating. Poster benben seems to think similarly. The second quote below indicates the media suffers from lousy standards of clarity in its reporting.
    For example:
    http://reneweconomy.com.au/coal-unit-trips-in-heatwave-as-tesla-big-battery-cashes-in-85623/

    The Australia Institute, which has documented the coal outages this year and produced a report on the intermittency of coal generators, argues that there should be a reliability obligation for coal and gas plants.

    The report found that over the month of February in 2017, 14 per cent (3600MW) of coal and gas electricity generation capacity across the NEM failed during critical peak demand periods in three states as a result of faults, largely related to the heat.

    There is a certain irony, in this context, to the idea of a “reliability obligation”!

    http://reneweconomy.com.au/nsw-coal-fleet-feels-heat-state-risk-system-black-96770/
    Refers to a report by the Energy Security Taskforce.

    The report was commissioned by the NSW government to examine risks to the resilience of the state’s electricity system after it came under pressure in February during a late summer heatwave, when four major coal and gas units failed in the heat.

    The incident, on February 10, 2017, saw the state narrowly escape a major, grid-wide outage when the capacity of available large thermal generators fell by about 805MW during the peak demand period, largely due to high ambient temperatures and cooling pond temperature limits.

    “Risks from extreme weather are likely to continue to increase and test the resilience of the (NSW) system”, the report says. “Large coal thermal plant generally will not perform as well in extreme hot weather and can also have output limited by environmental constraints, for example, cooling pond temperature limits.”

    A reasonable working theory is:
    1) A failure of all the recent Aussie administrations to deal with the conditions of the climate that we have now and have had for some decades, if not for the entire history of the Australian electrical grid.
    2) That politicians and campaigners can point fingers at coal plants, with warming as a bonus, in a game of denial and diversion.

    A missing piece of the data is the recent trend in summer electricity demand.

  38. Fascinating. I see not one, but three subduction zones near the northern and eastern borders of Bangladesh.

    Also this from wikipedia about the effects of the 2004 Indian Ocean Earthquake:

    “There was 10 m (33 ft) movement laterally and 4–5 m (13–16 ft) vertically along the fault line. Early speculation was that some of the smaller islands south-west of Sumatra, which is on the Burma Plate (the southern regions are on the Sunda Plate), might have moved south-west by up to 36 m (120 ft), but more accurate data released more than a month after the earthquake found the movement to be about 20 cm (8 in).[38] Since movement was vertical as well as lateral, some coastal areas may have been moved to below sea level. The Andaman and Nicobar Islands appear to have shifted south-west by around 1.25 m (4 ft 1 in) and to have sunk by 1 m (3 ft 3 in).[39]”

    https://en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake_and_tsunami

  40. Is it irrational to the “father”? – Toneb

    Sorry to burst your bubble but you have just illustrated why your own argument is illogical.
    This father has already made the wrong choice: the decision to let his daughter travel by any means other than a plane already exposed her to a hundredfold* increase in risk!

    *A 1% chance of death for a car versus 0.01% in a plane!

    Statistically speaking, flying is far safer than driving. However, it may feel more dangerous because risk perception is based on more than facts. – David Ropeik, Harvard School of Public Health.

  41. Someone asked a “what if” question on Roger Pielke’s Twitter feed for this graph.

    one q: if there had been no “climate diplomacy” how much would fossil fuel consumption have increased? // is there a comparison 25 years to compare it to?

    Roger’s answer:

    Great Q.
    1980-1992 FF increased 1.6%/yr
    1992-2016 1.6%/yr

  42. Figure 1. Satellite-measured sea level rise. Errors shown are 95% confidence intervals. Data Source

    That data source from the University of Colorado’s Sea Level Group is 18 months old; the last entry is 2016.5512

    The last entry from NASA’s Data is 2017.8521170

    Besides that, Kip Hansen’s post earlier last month demonstrated that NASA is lowering the earlier rate of sea level rise, which in effect allows the claim of acceleration to be made. Here’s his graph/animation from that post:

    If CU’s Sea Level Group ever publishes a new release it will be interesting to see what they say. After all, over the years, they’ve been telegraphing what they expect to find. All you have to do is read the titles of their various publications:

    Why has an acceleration of sea level rise not been observed during the altimeter era?

    NASA Satellites Detect Pothole on Road to Higher Seas

    Is the detection of accelerated sea level rise imminent?

  43. The adiabatic heating prediction by Holmes is correct to the extent of the accuracy of the ideal gas law to model the states of atmospheric air. As noted by many commenters, an alternative equation of state could be selected to improve accuracy.

    However, this use of an equation of state to calculate a temperature of a given air mass at a known state (ground-level) is an improper use of the thermodynamic equations. Even if one accepts the dubious assumption of adiabatic heating, the use of an equation of state in the form of change in Pressure/Volume (expressed in the form of molar density as you please) results in a prediction of change in temperature between State 1 and State 2 consistent with the assumptions embraced (adiabatic and constant mass in this instance).

    In short, the proffered calculation is a prediction of what engineers call PV-work expressed as a temperature change from some (in this article) unstated reference condition wherein the PV work would be zero m3-kPa. What the number is not is a prediction of the actual temperature. Many who have enjoyed the North American winter will attest that ground-level air temperature can vary quite widely despite relatively small changes in barometric pressure.

    The PV work equation could as easily be used with an adiabatic and isothermal (constant temperature) assumption, which would predict exactly zero temperature change (well, duh) and a change in molar density. The equation is useful to calculate a change of energetic condition between two given states; it is not useful for calculating, a priori, the temperature of a single, given state.
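    A minimal sketch of that last point, with illustrative numbers rather than anything from the article: the “predicted” surface temperature simply scales with whatever reference temperature is assumed aloft, so the relation yields a change between states, not the temperature of a state.

# Adiabatic/ideal-gas relation between two states; illustrative numbers only.
GAMMA = 1.4        # ratio of specific heats for dry air
P_REF = 50.0       # assumed reference pressure aloft, kPa
P_SURFACE = 101.3  # surface pressure, kPa

def t_after_adiabatic_compression(t1_kelvin, p1, p2, gamma=GAMMA):
    """Temperature after adiabatic compression from (t1_kelvin, p1) to pressure p2."""
    return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Two different assumed temperatures aloft give two different "predicted"
# surface temperatures: the equation predicts a change, not a state.
for t_ref in (230.0, 260.0):
    t_surface = t_after_adiabatic_compression(t_ref, P_REF, P_SURFACE)
    print(f"assume T = {t_ref} K at {P_REF} kPa  ->  surface T = {t_surface:.1f} K")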

  44. You are so far left (Far gone.) – and therefore so dumbed-down – that you can’t even write an intelligible sentence, much less form a rational argument!

    In what strange universe could the following line even remotely approach the truth:

    ya im finding the same thing all over the world. renewables are winning. unsubsidised. – Steven Mosher

    That fantasy realm resides in your own head and while it most certainly would appear immanent* to you, it is rather less than “imminent” for the rest of us!**

    *Immanent as opposed to imminent!
    **Everyone but you Steven! That same “us” that you presume to speak for!

  45. Thanks for the heads-up Nick.

    If I were a policy maker, I might have been misled by the technical summary for policymakers!

    The foreword led me astray from the start:

    “The report confirms that warming in the climate system is unequivocal, with many of the observed changes unprecedented over decades to millennia: warming of the atmosphere and the ocean, diminishing snow and ice, rising sea levels and increasing concentrations of greenhouse gases. Each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850.

    These and other findings confirm and enhance our scientific understanding of the climate system and the role of greenhouse gas emissions; as such, the report demands the urgent attention of both policymakers and the general public.”

    Now I see only errors rather than any deliberate deception on the part of the authors!
    Thanks to your advice, I can now just ignore the section on detection and attribution of climate change:

    D.3 Detection and Attribution of Climate Change

    Human influence has been detected in warming of the atmosphere and the ocean, in changes in the global water cycle, in reductions in snow and ice, in global mean sea level rise, and in changes in some climate extremes (see Figure SPM.6 and Table SPM.1). This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century. {10.3–10.6, 10.9}

    Thank goodness, you are here to help me read between the lines.

    It is surprising to see just how misinformed the uninformed* might be after reading this document!!

    *The non-technical i.e. Policymakers!
    **Somehow, high latitudes, NH, Spring, last and next centuries all got lost in translation ;-)

    [Thank you for testing the formats here. .mod]

  46. ==>frank climate

    The second quote should have had my bold added:

    Human influence has been detected in warming of the atmosphere and the ocean, in changes in the global water cycle, in reductions in snow and ice…

    I don’t believe sarcasm and satire are necessarily the same thing, perhaps I should have employed a “/wit ” tag also! ;-)

  47. The only people fighting more observation are the skeptics who demanded more observational evidence.

    That’s funny given that the IPCC went out of their way to ignore the entire observational record of CO2 in favour of proxy data onto which they could tack an instrumental record*

    Yeah, forget some 90,000 direct and accurate measurements made from rockets and balloons and use proxy data from low-resolution ice cores instead! It is not the skeptics fighting more observations; it is fanatics like you who are frightened by the real!

    *The Keeling Curve is probably where M.Mann got his inspirational idea.

  48. Before we had the Keeling curve*, there were many thousands of chemical measurements from rockets and balloons. Like Michael Mann, Keeling tacked instrumental data onto a proxy record (ice cores). It is a fact that the IPCC ignores the entire record of direct and accurate measurement data (some 90,000 samples) that might have been used to reconstruct a real CO2 contour over the last 200 years. This data corroborates recent stomatal studies that show dynamic fluctuation in background concentrations of CO2.

    In former times much higher concentrations were measured compared to today (e.g. around 480 ppm (1820), 388 ppm (1857) and 430 ppm (1942)).
    The 19th-century average CO2 concentration was 341 ppm compared to the modern 400 ppm.

    Stomatal studies have demonstrated that CO2 levels were significantly higher than is usually reported, up to 425 ppm about 12,750 years ago for example.

    *Callendar’s “Fuel Line” was the progenitor.

  49. Thank you, a more apt description of the IPCC process has probably never been written:

    It’s an abomination and abuse of science when people pick whatever is suitable to show what they want to be true, without being aware of (or taking into account) the debates that go on in the literature and behind the scenes. – Kristi Silber

    Before we had the Keeling curve*, there were many thousands of chemical measurements from rockets and balloons. It is a fact that the IPCC ignores the entire record of direct and accurate measurement data that might have been used to reconstruct a real CO2 contour over the last 200 years. That real data corroborates recent stomatal studies that show dynamic fluctuation in background concentrations of CO2.

    In the past two centuries the CO2 flux has been dynamic and higher concentrations were measured compared to today (e.g. around 480 ppm (1820), 388 ppm (1857) and 430 ppm (1942)). The 19th-century average CO2 concentration was 341 ppm – according to instrumental measurement – compared to the modern 400 ppm from Mauna Loa.

    Stomatal studies have demonstrated that CO2 levels were significantly higher than is usually reported, up to 425 ppm about 12,750 years ago for example.

    *Callendar’s “Fuel Line” was the progenitor.

  50. ==>Kent Clizbe

    I can’t stand Larry Kummer’s writing either. His missives* actually make my stomach churn. I’m obviously not arguing against the person** – someone I’ve never met – but I am totally against “his” position! I distrust this writer more than anyone I have ever read. The patronising persona he portrays in comments to his own posts is particularly ridiculous***. It is a constant mystery to me as to why this “figure” – for want of a better word – is given a platform on WUWT, and that makes me question every assumption I have about the truth of the reality of this forum. ;-(

    * With the emphasis on miss, i.e. the verb, fail to hit or reach…etc!
    **ad hominem.
    ***Ridiculous to me, of course!

  51. I think you may be trying to lead me, however the authors title is at the top of this post:

    By Larry Kummer. From the Fabius Maximus website

    You would be living in a very small world indeed if you hadn’t heard of Fabian Socialism. That group named themselves after the Roman statesman and general.

    Larry’s site is dedicated* to Quintus Fabius Maximus Verrucosus (surnamed Cunctator).

    Googling this name will probably tell you more than poor old Larry knows!

    Larry is a thinly disguised authoritarian, an SJW with a degree in Psychology.

    *In my opinion only, naming your site Fabius Maximus is a clear indication of your intent and ideology.

  52. ==>Samuel C Cogar February 16, 2018 at 6:18 am

    Straight down the memory hole:

    Thanks for posting that little “beauty”, ….. Scott W, ….. and my best and only response to the context of your post is to quote the author of the above posted commentary of what he injected (RED-BOLDFACED TYPE) at the beginning of his “quoted” Op-ed story, …… to wit:

    “It’s difficult to describe all the ways this is stupid.” – Samuel C Cogar February 15, 2018 at 5:04 am

    Good work, D*&^%head, you can’t even remember what you wrote. I think F$%^#k wit is an apt description of your imbecility and I’m sure you don’t want to know what I really think of you! Have a nice day, if you can! ;-)

    [This is not the kind of comment that is welcome on WUWT because of the disguised profanity. You could make the same points without using any form of profanity and probably more effectively. It is approved because it has gone to the TEST thread and so not read by many people other than yourself. If you are testing how far you can push the envelope it is likely the mods will block this from going out. . . mod]

  53. Anthony, mods or mod. I’ve spent several hours cross checking Paul Homewood’s post and I found him to be correct. I’m testing now to post but as I’m late to this post, it seems a shame that my efforts might be lost in the comments.

    cheers, Scott

    ————————————————————————————–

    Ok, I’ve taken up the challenge and have carefully examined the certified documents*

    I am a skeptic with no love for NOAA but I have to say that at the outset I thought that Paul Homewood’s claim would prove erroneous.

    Well, he is correct and NOAA should be alerted that their online graphing tool – “Climate at a Glance” – is incorrect.

    I looked at the State, the Division and the subset of existing stations still in use.

    Paul’s statement and his chart above it, are accurate:

    On average the mean temperatures in Jan 2014 were 2.7F less than in 1943. Yet, according to NOAA, the difference was only 0.9F.
    Somehow, NOAA has adjusted past temperatures down, relatively, by 1.8F.

    I carefully checked all his figures against the historical record and they are correct.
    And further, looking at the historical data for the central lakes Division**:

    The Jan 2014 average was 18.2 and the Jan 1943 average was 21.8 as listed (And by re-calculation).

    The difference is -3.6F which – oddly or coincidentally – is also the listed departure from normal in 1943***. Surprisingly perhaps, the 2014 historical data listing shows a -4.6F!**** which is close to the 0.9 figure – as an anomaly – not the difference in the recorded average temperatures.

    The -3.6F in the official documents is even worse than Paul’s 2.7F but oddly it is exactly 0.9F worse!!*****

    There seems to be some programming error because there is some odd symmetry in various numbers as shown above.

    Also for example the 21.8 (Temp in 1943) minus the listed 3.6 anomaly = 18.2! As a very average programmer for much of my working career, I can intuit what has happened. They are using the anomaly to calculate the temp difference from the baseline and in this case the numbers happen to be the same. I can see a broken greater-than-or-equals test/loop bug here! ;-)
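    A minimal sketch of that conjecture; the two baselines below are hypothetical, chosen only to reproduce the listed -3.6F and -4.6F anomalies, and which baseline belongs to which year is my assumption:

# Mixing anomalies computed against different baselines can masquerade as a
# much smaller 1943-vs-2014 difference than the recorded averages show.
temp_1943 = 21.8  # Jan 1943 divisional average, deg F (as listed)
temp_2014 = 18.2  # Jan 2014 divisional average, deg F (as listed)

baseline_1901_2000 = 25.4  # hypothetical: reproduces the listed 1943 anomaly of -3.6 F
baseline_1981_2010 = 22.8  # hypothetical: reproduces the listed 2014 anomaly of -4.6 F

anom_1943 = temp_1943 - baseline_1901_2000  # -3.6 F
anom_2014 = temp_2014 - baseline_1981_2010  # -4.6 F

recorded_diff = temp_2014 - temp_1943  # -3.6 F: difference of the recorded averages
mixed_diff = anom_2014 - anom_1943     # -1.0 F: what a baseline mix-up would report

print(f"recorded difference: {recorded_diff:.1f} F, baseline-mixed difference: {mixed_diff:.1f} F")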

    That is all I can suggest and I may be wrong in my conjectures but Paul is correct and NOAA are publishing incorrect data online!

    *PDF files of the original hard copies, as supplied by NOAA
    **All stations in the division as listed, including Paul’s subset.
    ***The anomaly in 1943 (Using a different baseline).
    ****Different baselines i.e. 1981-2010 v 1901-2000
    *****0.9 you will remember is NOAA’s “current” official difference!

  54. When I read the word ensemble used to describe a group of climate models, I don’t think of beautiful music! At best, the discordant noise might hope to approach the chaotic beauty of an orchestra tuning to a single note (440 Hz). Alas, they start out close and it’s all downhill – or should I say uphill – from there, as they diverge into a cacophony!

    Ensemble is a sad way to describe the mean* of such garbage.

    *Or its probability distribution

  55. Mod, mods, moderator said:

    “[This is not the kind of comment that is welcome on WUWT because of the disguised profanity. You could make the same points without using any form of profanity and probably more effectively. It is approved because it has gone to the TEST thread and so not read by many people other than yourself. If you are testing how far you can push the envelope it is likely the mods will block this from going out. . . mod]”

    This “kind of comment” was never posted, it was a test; full stop!
    And it wasn’t a comment on the post but a reply to a personal attack.

    You might want to check your threads as I was just duplicating what others had done on that very post by “disguising my profanity!”

    I’m hearing you loud and clear though! I will avoid profanity, disguised or otherwise in future!

    On a philosophical note, I don’t believe profanity – if disguised – is actually profane*. If you honestly believe it is, then you are on a slippery slope where you will have to block the use of the word “profanity” eventually!

    Sincerely,

    Scott Bennett

    *A bad word – a profanity – as you say.

    [Just trying to be helpful as I thought that was why you were testing. My mistake. . . mod]

  56. Testing formatting… Please ignore
    Allan – “Rhodesia is a term that was used to describe the former British protectorates of Northern Rhodesia (now Zambia) and Southern Rhodesia (now Zimbabwe). Later Southern Rhodesia declared unilateral independence as Rhodesia”

    True, in that Rhodesia, as in the full pre-independence name “Federation of Rhodesia and Nyasaland”, referred to both Northern and Southern Rhodesia (Nyasaland is now Malawi). However, we were never a protectorate, prior to independence, but a self-governing dominion loyal to the Crown, with a governor, similar to Australia and Canada.

    Otherwise, I agree that Zimbabwe is stuffed.

    Rob

  57. Willis Eschenbach you stated: “While replicable experiments have shown that CRs COULD have a significant effect on cloud formation, to date I know of no studies showing that they DO have such a significant effect.”
    Have you studied the impact of Forbush events on clouds? See:
    Svensmark, J., Enghoff, M. B., Shaviv, N. J. & Svensmark, H. The response of clouds and aerosols to cosmic ray decreases. J. Geophys. Res.: Space Phys. 121, 8152–8181 (2016). Posted at:
    http://orbit.dtu.dk/ws/files/126609957/Svensmark_et_al_2016_Journal_of_Geophysical_Research_Space_Physics.pdf
    This is summarized in:
    Svensmark H, Enghoff MB, Shaviv NJ, Svensmark J. Increased ionization supports growth of aerosols into cloud condensation nuclei. Nature communications. 2017 Dec 19;8(1):2199. https://www.nature.com/articles/s41467-017-02082-2 They review citing ref 7 above:
    “On rare occasions the Sun ejects solar plasma (coronal mass ejections) that may pass Earth, with the effect that the cosmic ray flux decreases suddenly and stays low for a week or two. Such events, with a significant reduction in the cosmic ray flux, are called Forbush decreases, and can be used to test the link between cosmic ray ionization and clouds. A recent comprehensive study identified the strongest Forbush decreases, ranked them according to strength, and discussed some of the controversies that have surrounded this subject [7]. Atmospheric data consisted of three independent cloud satellite data sets and one data set for aerosols. A clear response to the five strongest Forbush decreases was seen in both aerosols and all low cloud data [7]. The global average response time from the change in ionization to the change in clouds was ~7 days [7], consistent with the above growth rate of ~0.4 nm h⁻¹. The five strongest Forbush decreases (with ionization changes comparable to those observed over a solar cycle) exhibited inferred aerosol changes and cloud micro-physics changes of the order of ~2% [7]. The range of ion production in the atmosphere varies between 2 and 35 ion pairs s⁻¹ cm⁻³ [37], and from Fig. 1b it can be inferred that a 20% variation in the ion production can impact the growth rate in the range 1–4% (under the pristine conditions). It is suggested that such changes in the growth rate can explain the ~2% changes in clouds and aerosols observed during Forbush decreases [7].”

  58. How to monster* from authority:

    The EPOCA FAQ is pretty good. The first Q is
    “The ocean is not acidic, and model projections say the oceans won’t ever become acidic. So why call it ocean acidification?”
    The A in EPOCA stands for Acidification.

    – Nick Stokes

    The Church of the Flying Spaghetti Monster (FSM) FAQ is pretty good. The first Q is
    “Can I be a member if I don’t literally believe in the Flying Spaghetti Monster?
    The P stands for Pastafarian i.e Spaghetti.

    *Monster, to show, to prove “de-monster-atively” (demonstratively). From the Latin, monstrare, meaning ‘to demonstrate’, and monere, ‘to warn’.

  59. The Argument from authority is a common form of argument which leads to a logical fallacy.

    The appeal to authority relies on an argument of the form:

    A is an authority on a particular topic
    A says something about that topic
    A is probably correct

    How to monster* from authority:
    The EPOCA FAQ is pretty good. The first Q is
    “The ocean is not acidic, and model projections say the oceans won’t ever become acidic. So why call it ocean acidification?”
    The A in EPOCA stands for Acidification.

    – Nick Stokes

    The Church of the Flying Spaghetti Monster (FSM) FAQ is pretty good. The first Q is
    “Can I be a member if I don’t literally believe in the Flying Spaghetti Monster?
    The P stands for Pastafarian i.e Spaghetti.

    – Scott Bennett

    *Monster, to show, to prove “de-monster-atively” (demonstratively). From the Latin, monstrare, meaning ‘to demonstrate’, and monere, ‘to warn’.

  60. The Argument from authority is a common form of argument which leads to a logical fallacy.

    The appeal to authority relies on an argument of the form:

    A is an authority on a particular topic
    A says something about that topic
    A is probably correct

    How to monster* from authority:

    The EPOCA FAQ is pretty good. The first Q is
    “The ocean is not acidic, and model projections say the oceans won’t ever become acidic. So why call it ocean acidification?”
    The A in EPOCA stands for Acidification. – Nick Stokes

    The Church of the Flying Spaghetti Monster (FSM) FAQ is pretty good. The first Q is
    “Can I be a member if I don’t literally believe in the Flying Spaghetti Monster?
    The P stands for Pastafarian i.e Spaghetti. – Scott Bennett

    *Monster, to show, to prove “de-monster-atively” (demonstratively). From the Latin, monstrare, meaning ‘to demonstrate’, and monere, ‘to warn’.

  61. This is the most appalling abuse of terminology I have ever seen:

    For the carbonate/bicarbonate system, relevant to calcium carbonate dissolution, sea water is well on the acid side of neutral. – Nick Stokes

    OMG!

    What the hell are you talking about? Are you getting a bonus for absurdity?

  62. Writing a comment about the work of other scientists is not a “scientific idea”; it is simply a comment, which of course is what BCA is all about

    Oh, do come on! Of course it is; what else is science if it isn’t – at the very least – the sentient* re-cognising sentience!

    Willis mentioned one of his heroes and one of mine is Emilie du Châtelet (1706-1749).

    Du Châtelet was arguably the leading interpreter of modern physics in Europe, as well as a master of mathematics, linguistics and the art of courtship!

    She was at least as well read as her lover Voltaire – correcting him and improving on Newton – essentially through literary review.

    Without her we would not have the “squared” in E=mc²! There are books about the evolution of that famous equation, and it was her interest in “energy” that connected the work of other scientists, improving on Voltaire’s mv¹ to show that multiplying an object’s mass by the square of its velocity (mv²) was a more useful indicator of its energy!

    Again to be clear, it was her acute awareness of the current scientific literature of the time that gave the world a breakthrough. And that makes me wonder further about the “acausal” chain of events that Willis has spoken about
    so intelligently in his post!

    *Why? Because logic is akin to sentience; it is a priori to all study or knowledge.

  63. Willis says: “Comments are limited to 300 words”

    No Willis. The 300 word limit is for “Correspondence Items”. These are described as follows:

    “These items are ‘letters to the Editor’: short comments on topical issues of public and political interest, anecdotal material, or readers’ reactions to informal material published in Nature (for example, Editorials, News, News Features, Books & Arts reviews and Comment pieces).”

    “Note that Correspondence pieces are not technical comments on peer-reviewed research papers. Please submit these instead to Brief Communications Arising.”

    We have THREE places which define BCA’s as “comments”:

    1. Comments on recent Nature papers may, after peer review, be published online as Brief Communications Arising

    Hello! Comments may be published online as Brief Communications Arising. How much clearer can you get?

    2. “Brief Communications Arising are exceptionally interesting and timely scientific comments

    3. “Please submit these [ referring to ‘technical comments on peer-reviewed research papers’ in the previous sentence] instead to Brief Communications Arising

    Willis says: “Comments are classed as “Letters to the Editor”

    NO. Nature specifically says Correspondences are classified as Letters to the Editor.

    Willis says: “A BCA is peer-reviewed. A comment is not”

    WRONG. Nature specifically states, “Correspondence submissions are not usually peer-reviewed and so should not contain primary research data.”

    Willis says: “A BCA is allowed two graphics. A comment is not allowed any graphics

    NOT TRUE. Nature says, “Correspondence items should be no longer than 300 words. They do not usually have figures, tables or more than three references.” In Nature, under the BCA tab, BCA submissions are further defined as “manuscripts”, which are then referred to as “comments and replies”.

    Willis says: “BCAs require a “competing financial interests statement” and an “author contributions statement”. Comments do not require either

    NOT TRUE. As noted above, Nature defines BCA’s as comments. Correspondence items do not require the financial statements and author contribution statement.

    Everything Willis has stated about BCA’s not being comments is not true. He has twisted and redefined terms to his own liking.

  64. I will try this once more!

    Seawater is not “well on the acid side of neutral” despite what any chemist might say!

    The important political issue today is what the layman hears when they know nothing of the intricate debate that surrounds this “problem.”

    The term “acidification”, in relation to the ocean, is a misnomer, no matter how prevalent it is in the discipline of chemistry or in science generally. Why? Because pure water* is smack dab in the middle of the scale.

    H2O is the measure from which the entire scale is derived. It has a pH of 7 and is, in truth, neither acidic nor caustic (basic).

    What the “political scientists” – the radical left warm-mongers – need to tell the layman is that the fresh water that falls out of the sky is more acidic. And the purest lake on Earth – that might have existed in the Garden of Eden – is even more so! Do they know that? Are you going out of your way to explain the reality? No, it is sensationalist and misleading language that is factually meaningless, in the end!

    Tell them that the ocean’s “Alkalinity” is not a measure of how alkaline it is!
    No don’t bother, it is too close to the truth, they might start to actually think for themselves, and perceive the deception.

    Acid or caustic, tell the average person what actually happens when you add more of the “acid” H2O to sea water! If you add an acid to any solution in the continuum it will become more acidic… Yes? This is your argument, or at least the argument you make to the vast unwashed**

    Now tell them the truth; that it is not the case!

    Adding pure water to an acidic solution decreases the concentration of H+(aq) ions. This causes the pH to increase towards 7.
    Adding pure water to an alkaline solution decreases the concentration of OH-(aq) ions. This causes the pH to decrease towards 7.
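    A quick numerical sketch of those two statements for a strong acid, including water’s own autoionization (the concentrations are illustrative only):

import math

# Diluting a strong acid with pure water: the pH climbs toward 7 but never past it.
KW = 1e-14  # water autoionization constant at 25 C

def ph_strong_acid(c_acid):
    """pH of a fully dissociated monoprotic acid of concentration c_acid (mol/L)."""
    h = (c_acid + math.sqrt(c_acid**2 + 4 * KW)) / 2  # charge balance, including water's H+
    return -math.log10(h)

c = 1e-3  # start at 0.001 mol/L of strong acid (pH ~ 3)
for _ in range(8):
    print(f"{c:.1e} mol/L  ->  pH {ph_strong_acid(c):.2f}")
    c /= 10  # dilute tenfold with pure water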

    These notions are triadic, yet all most of us are ever exposed to, taught, or ever need to know is the reasoning of dichotomies, dualities and binary oppositions.

    The average person could not be expected to understand the “duplicity” inherent in the marketing of this new scare tactic.

    And that is why it is so very important to be particular in the use of terminology for fear that ‘communication’ might become jargon***.

    *H2O, pure distilled water.
    **Average person: non-chemist, non-scientist or non-specialist.
    ***Barbarous, in the archaic sense.

  65. Tsk-tsk by name* and deed!

    Were you opposed to statements about Obama’s war on coal? – Tsk Tsk

    Well, only after he claimed ownership of it! ;-(

    “If somebody wants to build a coal-fired power plant, they can. It’s just that it will bankrupt them,” Obama said, responding to a question about his cap-and-trade plan. He later added, “Under my plan … electricity rates would necessarily skyrocket.” – President Barack Obama**

    *Meaning, shame on you!
    **January 2008 interview with the San Francisco Chronicle editorial board.

  66. My night-sky/aurora-chasing buddy is named Steve and it upsets me (and him) that some dumb official’s nomenclature has overridden all the various global contemporary names that observers have already coined!

    I was blown away when I witnessed “Tiger Stripes” and a “Proton Arc” alone one night, above my home in Tasmania.

  67. The abstract says nothing about global warming. It’s about carbonate concentration and the availability of carbonate-complexed iron. – Nick Stokes

    Yeah, that Anthropogenic Global Warming sure is an inconvenient truth! Particularly given that the abstract’s ability to scare us would be rendered useless if it had made the mistake of mentioning the accompanying warming associated with that predicted rise in CO2 by 2100.

    In other words one counteracts the other! But the authors want to have it both ways.

    More warmth is good for plankton, but if we can just show that reduced iron in these conditions is bad, then bingo!

    Considering that carbonate ions are required for activity of the ferric iron assimilation system, ocean acidification might inhibit iron uptake, perhaps partially offsetting the positive effects of warming -MM41A-03: Iron Bioavailability in High-CO2 Oceans

  68. Old Nick* indeed; stoking the fires of hell!

    The suggestion here is that scientific bodies should supplement their positive recommendations with negative ones where they think information circulating is wrong. I see no difference in principle.

    – Nick Stokes

    What is truly frightening to me, is that you might genuinely believe what you have written here.
    I’m clinging to the tiny hope that you are a shill for the machine but “know not what [you] do”!

    I find your position creepy in the extreme.

    If you don’t read the newspaper you are uninformed; if you do read the newspaper you are misinformed.

    Twain’s famous quote is ironically illustrative in this context, because it is actually apocryphal!

    It is the “doubting Thomases”, Thomas Fuller(1662)*** and later Thomas Jefferson(1807) that have my ear and my heart.

    …I had rather my Reader should arise hungry from my Book, than surfeited therewith; rather uninformed than misinformed thereby; rather ignorant of what he desireth, than having a falsehood, or (at the best) a conjecture for a truth obtruded upon him.” – Thomas Fuller

    And in reference to newspapers:

    Truth itself becomes suspicious by being put into that polluted vehicle. The real extent of this state of misinformation is known only to those who are in situations to confront facts within their knowledge with the lies of the day.

    I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors. He who reads nothing will still learn the great facts, and the details are all false. – Thomas Jefferson

    However, Mark Twain does cut straight to the chase:

    Often, the surest way to convey misinformation is to tell the strict truth. – Mark Twain

    IMHO:

    I prefer to be informed but I fear being misinformed more than I fear being uninformed! – Scott Wilmot Bennett

    *The Devil

  69. EBM (Evidence Based Medicine) is just Molière’s medicine (*) but with a lot of studies with statistics on top; statistics that modern doctors (**) are not able (nor willing) to understand, and are even willing to understand inversely: many researchers, doctors and commentators believe (or pretend, for comfort) that a medical study that does not find an effect at the arbitrary and capricious threshold of p<.05 actually demonstrates, or at least suggests, an absence of effect.

    (*) “Le malade imaginaire”/”The Imaginary Invalid”, “Le Médecin malgré lui”/”The doctor/physician in spite of himself”
    (**) incl. those doctors/researchers that pretend to be good at two jobs but suck at both

  70. Twenty-Year Impact of CH4 (in mK)
    0.003 … Waste/Landfill
    0.001 … Biomass Burning
    0.00 …. Waste Burning
    0.002 … Agriculture
    0.005 … Animal Husbandry
    0.00 … Household fuels
    0.00 … Shipping
    0.00 … Non-Road
    0.00 … Road
    0.00 … Aviation
    0.00 … Industry
    0.001 … Biofuel
    0.006 … Energy

    0.018 … Total

  71. “How can you have a “feedback” to a static temperature?”
    Indeed. You can’t. It makes no sense to even try to quantify it. – Nick Stokes

    Of course you can, because that is the very definitional requirement for unchanging temperature!
    Temperature is a measure of average heat flux. If the temperature is unchanging – static – then the heat input(gain) is equal to heat output(loss) i.e Thermodynamic equilibrium.

    The magnitude of the output (loss) is dependent – in this case – on the particular atmospheric composition,* which provides the “feedback” that results in the final measured temperature.

    *With or without “non-condensing greenhouse gases”

  72. If the temperature is unchanging – static – then the heat input(gain) is equal to heat output(loss) i.e Thermodynamic equilibrium. – Scott Bennett

    Whoops! The word “equal” is wrong above!

    What I meant is that the relationship (input/output) is unchanged.

    If the temperature is unchanging, then the flux has a constant rate.
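    For what it’s worth, a toy zero-dimensional energy balance makes the idea concrete (my own illustration, not the commenter’s numbers): a static temperature just means the absorbed and emitted fluxes balance, and where they balance depends on the atmosphere’s effective emissivity.

# Toy zero-dimensional energy balance; illustrative values only.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0       # solar constant, W m^-2
ALBEDO = 0.3

absorbed = (1 - ALBEDO) * S / 4  # ~238 W m^-2, set by the Sun and the albedo

for eps in (1.0, 0.61):  # effective emissivity: no greenhouse effect vs. a toy value
    t_eq = (absorbed / (eps * SIGMA)) ** 0.25  # temperature at which emission balances absorption
    print(f"effective emissivity {eps}: steady-state T ~ {t_eq:.0f} K")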

  73. In this case, if Svensmark’s theory about cosmic rays affecting the climate were true, we should see some kind of an eleven-year cycle in the Irish rainfall. Svensmark’s theory is that cloud formation is affected by cosmic ray levels, which in turn are affected by the variations in the sun’s magnetic field that are synchronous with the 11-year sunspot cycle.

    Willis, I have no dog in this and I do agree with your various and rigorous observations that there is no data to support the 11-year sunspot cycle.

    However strictly speaking, “rays affecting the climate” isn’t the same thing as rays affecting rainfall – of course! ;-)

    Nucleation is a “potential” for cloud formation; it isn’t the same thing as humidity, nor is cloudiness necessarily correlated to changes in precipitation*.

    As I understand it, the Earth’s magnetic field deflects particles best from equatorial regions but provides little to no protection above 55 degrees magnetic latitude. And even the choice of hemisphere has an influence on observed measurements of total flux; apparently.

    Given that most of Ireland is above 50 degrees geographic and the North magnetic pole is around 80 degrees, it would be right in the zone of increased flux.

    Perhaps “rays” do explain all that rain ;-)

    *Precipitation might be correlated to cloud formation but the causation isn’t direct.

  74. So, here’s a test of that ability. Below is recent sunspot data, along with four datasets A, B, C, and D. The question is, which of the four datasets (if any) is affected by sunspots? – W

    Now for something completely unscientific, from me!

    I took the challenge without reading the entire post.

    Using Mk1 eyeball, dataset D had peaks in the troughs
    and with a slight tilt towards the angle of the trend and a little lag, it seemed to show a promising correlation – of some sort!

    I have no idea what it means, they just look like they belong together! :-)

  75. “EPA said it would no longer use science without publicly available data to craft regulations, honoring a long-sought industry goal”
    In essence, honoring the Fifth and Fourteenth Amendments to the United States Constitution: due process should apply to any accusation of harm, not just a claim of harm of a particular victim, but also to claims to harm to the community or harm of nature. Industrial corporations are persons with constitutional rights, too. (Even if you want to deny these rights to corporations for some reason, the owners of the corporation then can claim the same constitutional protections.)
    Until now, “Simon Science Says” was the game; Science Inc. could be represented by any federal body, or even the “Academy of Sciences”, totally “independent” from the federal state (while getting nearly all its funds from the state). And of course almost nobody in the so-called “science community” protested against these egregious abuses, and the ABA didn’t either (I guess the ABA was too busy protecting the “right to privacy/abortion” to protect any other right, esp. the right to privacy/refuse medical treatment for a disease one doesn’t have, aka a vaccine, aka “my body, my choice of vaccines”).
    The deep cause is the irrational cult of “Science”, as if it was pure knowledge – unlike “sport” with all the cheating and bad behavior.
    Science is a form of sporting competition, with all the lack of sportsmanship that can be seen in most sports. Science is a high-stakes domain with a lot of money, and the sports with the most money are not known for better sportsmanship, honesty and transparency.
    Just because there is competition doesn’t imply that competitors are likely to tell about the cheating of others as:
    – they may have been involved in schemes almost as bad,
    – there is shared interest in protecting the image of the field as a mostly clean one,
    – everybody has a lot to lose if there is a complete investigation of the bad practices.
    Just like in sport, the mainstream media plays along as a few bad cheaters are denounced (human cloning) while a lot of slightly less obvious cheating is implicitly approved. The endless focus on a few cheaters gives the illusion that people in the field do care about fraud.
    Just like there is doping in sports, there is “doping” in “science”, especially in bio-medicine, and it should worry everybody because the gold standard of medicine, “evidence based medicine”, relies entirely on studies that don’t say what people say they do.

  76. MarkW, surely that logic applies universally (to all human organisations).
    “Neutrality is not in human nature. Therefore any government funding of broadcasters is always a bad idea.”

    Indeed, the least bad option would be a broadcaster who has some feedback from the widest possible number of people – such as the electorate.

  77. If gravity is the reason that galaxies hold together and rotate then Newton’s laws require the rotational velocity to decrease with the radius from the center of the galaxy. However, our observations show that the spiral arms of galaxies rotate at the same speed as the galaxy core. This observation confounded scientists who insisted upon the gravity model and this led to Dark Matter being theorized. Scientific crack filler. If more mass was present than observed then Newton’s laws could hold true. I’m not aware of any claim that dark matter has actually been observed.
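    For reference, a minimal sketch of that tension: the Keplerian fall-off that Newtonian gravity predicts for orbits around a central point mass, versus the roughly flat curves observed (the central mass and the ~200 km/s figure are illustrative only).

import math

# Keplerian rotation curve around a central point mass: v(r) ~ 1/sqrt(r).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 2e41        # assumed central mass, kg (roughly 1e11 solar masses)
KPC = 3.086e19  # one kiloparsec in metres

for r_kpc in (2, 5, 10, 20, 40):
    v_kepler = math.sqrt(G * M / (r_kpc * KPC)) / 1000.0  # km/s
    print(f"r = {r_kpc:>2} kpc: Keplerian v ~ {v_kepler:.0f} km/s (observed curves stay near ~200 km/s)")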

    Just like with climate science, many other areas of science are built upon assumptions. For climate the assumptions are that the temperature records are worthy of being the foundation of further study, that the models are correct, and that CO2 is “THE” control knob to climate and the sensitivity to CO2 is high. How much “science” gets built upon these assumptions? In astrophysics the assumptions are gravity, accretion disc creation, nuclear fusion stars and general relativity. How much work has been built upon these assumptions and how much are we now learning that doesn’t comply with these assumptions? Question the theory? Blasphemy! Get more crack filler. I don’t think there is any proof of a black hole – despite the fact that so many things are called black holes. No proof of dark matter or dark energy. The recent claim of gravity wave detection by LIGO is something I’m highly dubious of. Gravity is not understood at all. Newton’s equation for it defines it as a function of mass. Mass is also something that isn’t understood outside of gravity or some other force like gravity – unless it is related to energy. So we get things defined circularly. Gravity isn’t defined as a function of time by Newton, so it is hard to understand why it is a wave and why it would have a propagation speed. The stability of the planetary orbits in the solar system seems to come from gravity being an instantaneous force – the Earth rotates around the Sun exactly where it is at this moment – not where the sun was 8.3 minutes ago “when the wave left the sun.”

    There is another theory worth considering – it does not require the invention of dark matter to explain the rotation of galaxies. But it does require a challenge to some cosmological assumptions. I’m providing a link here to a 12 minute Youtube video which explains it. Dr. Donald Scott gives us a preview of the paper he published this month in the journal Progress in Physics. The paper is titled: Birkeland Currents and Dark Matter. I’m also providing a link to the paper.

    http://www.ptep-online.com/2018/PP-53-01.PDF

    The theory is that electromagnetic force – not gravity – explains the rotation of galaxies. This theory is not Scott’s alone, but Scott builds upon the work of others and proposes a model of the Birkeland current which has the EM force decaying as 1/sqrt(r) (the inverse of the square root of the radius from the axis of the current – a Bessel-function profile). Galaxy rotation seems to comply well with this model, according to Scott.
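
    As a purely mathematical aside (this is only the textbook asymptotic behaviour of a Bessel function, not a re-derivation of Scott’s model, and it assumes NumPy/SciPy are available): the oscillation envelope of a Bessel function of the first kind does fall off as roughly 1/sqrt(r), which is the fall-off referred to above.

# Numerical check of the ~1/sqrt(r) envelope of a Bessel function of the first
# kind (order 1).  Textbook asymptotics only - not Scott's model itself.
import numpy as np
from scipy.special import j1

r = np.linspace(5.0, 100.0, 20)          # dimensionless radial coordinate
envelope = np.sqrt(2.0 / (np.pi * r))    # asymptotic amplitude ~ 1/sqrt(r)

print("     r    |J1(r)|   sqrt(2/(pi*r))")
for ri, ji, ei in zip(r, np.abs(j1(r)), envelope):
    print(f"{ri:6.1f}   {ji:7.4f}   {ei:7.4f}")

# |J1(r)| oscillates, but its peaks track the slowly decaying 1/sqrt(r) envelope.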

    For this to make sense, here is the rest of it. This is part of the EU (Electric Universe) theory of cosmology. It is controversial – and that makes sense, because it challenges the established cosmology of accretion-disk formation of stars and planets. It theorizes massive flows of plasma/charged particles through space in the form of Birkeland currents; these currents are intergalactic. Some other key parts of the theory include: stars form at “z-pinches” in the plasma, stars are connected electrically, a star and its planets are connected electrically, comets are electric (the tail is an ion trail – comets are not “dirty snowballs”), and more. A lot of recent data from space probes lends support to some aspects of this theory: comets are rocky bodies rather than snow-covered ones, Voyager did not find what was expected at the heliopause and the magnetic field continues past it, counter-rotating rings on Jupiter and Saturn behave as Birkeland currents, etc. Instead of dark matter and other crack-filler theories, the EU theory is taking a look at the possible role plasma and related electrical phenomena play in the cosmos.

    WMW

    • Hi Janice  

      I see you tried to post a YouTube video and it failed in test. I know you think there’s some sort of filter on you…but there isn’t. 

      Just post the YouTube URL from the address line of your browser.  What you posted had a bunch of extra code around it.

      Anthony

      • Testing to see if I can add a graphic to a post.

        Here goes…

        Ps – I appreciate the test page on WUWT.

        WMW

  78. The ice doesn’t have to melt for the sea level to rise; it just needs to break up and fall into the sea. The map shows how much ice in Antarctica is being exposed to an awful lot of freshly-arriving warm water.

    I prefer the judgement of experts. You don’t know what you are talking about and thus you are quite at home here. – Stephanie Hawking

    Good, since you like the experts, you might want to read what the experts say about your warm water theory and how it is able to breach the Southern Antarctic Circumpolar Current front!

    Why has the sea ice cover surrounding Antarctica been increasing slightly, in sharp contrast to the drastic loss of sea ice occurring in the Arctic Ocean? A new NASA-led study finds the geology of Antarctica and the Southern Ocean are responsible. – Son Nghiem (NASA/NOAA/JPL/Caltech team leader)

    Location of the southern Antarctic Circumpolar Current front (white contour), with -1 degree Celsius sea surface temperature lines (black contours) on Sept. 22 each year from 2002-2009, plotted against a chart of the depth of the Southern Ocean around Antarctica. The white cross is Bouvet Island.
    Credits: NASA/JPL-Caltech


    Study Helps Explain Sea Ice Differences at Earth’s Poles

  79. The figure Stephanie Hawking shows is worthy of some discussion. However, I think this figure is actually more helpful:

    Take a look at the bottom part of the figure, which shows a profile of Antarctica including ice both below and above sea level.

    Stephanie was not clear (to me) about her point in bringing up this figure, but I’ll assume it is why Alarmists often mention the West Antarctic Ice Sheet (WAIS). (I’m not referring to Stephanie here – just using her figure to address what Alarmists usually do with it.) Notice that WAIS is grounded, but the ground the ice sits on is 500-1000 meters below sea level. The floating ice shelves around WAIS do provide some protection for the grounded WAIS, but the ice is exposed to the oceans. Many know this, and some do not. The Alarmist narrative is that if the ice shelves collapse (which they all seem to do periodically), then WAIS could be unprotected, and it too could “collapse” (or break up). We then hear about how much the sea level will rise, etc., etc.

    How many articles on the topic of ice sheets include the comment “… and if all the ice melts…” followed by the list of tragic consequences? This is a ridiculous fear, and I’ll explain why – ridiculous, at least, for it to happen within even a thousand years. The WAIS concern is similarly flawed, but at least there is something there to work with, and the Alarmists have a point about the ice being grounded below sea level. So why is this actually not a real concern?

    Ice shelves and ice sheets are very different. Ice shelves are by their nature structurally weak compared to grounded ice. Remember that, due to the density difference between ice and water, a floating chunk of ice will always sit roughly 90% underwater and 10% above water. Said another way, the water must be much deeper than the chunk of ice is tall in order to float it (for whatever dimension ends up being the height, based upon its shape). From my figure, you can see that WAIS extends 500-1000 m below sea level but also another 1500-2000 m above sea level.
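
    A quick check of that 90/10 figure (standard textbook densities, my own two-liner): by Archimedes’ principle the submerged fraction of floating ice is just the ratio of the ice density to the seawater density.

# Archimedes' principle: submerged fraction = rho_ice / rho_seawater.
# Densities are typical textbook values.
rho_ice = 917.0        # kg/m^3, glacial ice
rho_seawater = 1025.0  # kg/m^3, seawater

submerged = rho_ice / rho_seawater
print(f"below the waterline: {submerged:.1%}, above: {1.0 - submerged:.1%}")
# -> roughly 89-90% below the waterline, 10-11% above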

    Those who have concerns about WAIS point out that channels are being generated by water under the ice. This happens in some areas, but the size of these channels is minuscule compared to the size of WAIS. The further inland a channel gets, the less water circulation there will be, and what is more likely to happen is that the water will freeze. A 5-10 foot deep channel of water will be covered by 8,000-9,000 feet of ice perhaps at -50C. The Antarctic bottom water is typically -0.8 to +2C (I don’t know why people refer to water a few degrees above freezing as “warm”). While sea water freezes at ~-2C (depending upon salinity), remember that once the water freezes, the ice has very little residual salinity. Ice in the second and subsequent years continues to shed salinity through cracking and other processes, and its melting point returns to around 0C. So water near the ice, or water that finds its way under the ice, will have a very difficult time thermodynamically doing any melting: it will already be below 0C, and even at +2C it is thermodynamically weak. The energy flow from the water to the ice will cool the water – and once the ice gets close to 0C it will take a long time to melt, because thermal energy moves slowly with small temperature gradients.

    Recent exploration of the Ross Ice Shelf earlier this year shows that ice is forming under the shelf – not melting.
    https://news.nationalgeographic.com/2018/02/ross-ice-shelf-bore-antarctica-freezing/

    So the ice is cold enough to freeze the water beneath it. And this is a shelf – not a sheet – with 500-1000 m of water underneath it!

    If melting under the ice produced any local structural weaknesses, then at most you would have a column of ice 9,000 feet tall slump a few feet into the water channel – driving the water out and/or freezing it. There is just no way WAIS can break up like an ice shelf. You would need a water depth of approximately 110% of the height of the ice sheet for the ice to float away. These ideas are put forward by people who have never thought about or studied structures. I don’t see the physics involved in a “break up” of the sheet. (This should be good news – but Alarmists fight it!)

    Furthermore, Alarmists talk about ice melting as if it could happen in isolation from the rest of the “global climate”. When ice melts, it cools the air or the water that melts it! When ice forms, the air or water around it gets warmer! Otherwise, where would the heat come from, or go to? A lot of melting ice will make the atmosphere much colder!

    I want to thank Julius for putting together the blog post. I was actually working on something similar. I’ll just sprinkle in a few points from what I was working on.

    Alarmists talk about a warming world. We are told that the “global average temperature” is now ~16.5C. Some fear that the “global average temperature” will go up by 4C by the end of this century. Using the approach that Julius used, let’s take a look at a few things. First, how much ice could we melt if we could trade all of the heat energy in the atmosphere with the ice sheets? Let’s start with 16.5C. I’ll give you my assumptions and the values used in the calculations. Feel free to change the values – you will see that the story doesn’t really change.

    We need to determine the energy in the atmosphere above 0C (since that is the melting point of ice). To do this we need the mass of the atmosphere, the specific heat and the average temperature. I refer you to the figure below which shows a vertical atmospheric temperature profile.

    The x-axis gives you the temperature and the y-axis on the left gives you the altitude in km. The red line shows the temperature as you go from the ground up to the top of the thermosphere. What should be immediately noticed (and interesting) is that it is only the air below around 8,000 feet (2.5km) that is above 0C. The top of the thermosphere is excepted as it has almost no mass and no direct thermal coupling. The mass of the atmosphere is 5.15×10^18 kg. 75% of the mass of the atmosphere is contained in the troposphere. I’ll assume 35% is in the first 2.5km, where temperature is greater than 0C. So, the mass we will use is 1.8×10^18 kg. I’m going to use 8C as the average temperature below 2.5km – it appears to actually be lower, but 8C is conservative. The specific heat of the atmosphere is 1,005 J/kg/C. So, we therefore have:

    1.45×10^22 Joules in the atmosphere available to melt ice.
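
    For anyone who wants to check the arithmetic, here it is as a short Python script using exactly the assumptions above (35% of the atmosphere’s mass below 2.5 km, an 8C average for that layer):

# Energy available in the sub-2.5 km air before it is cooled to 0 C,
# using the assumptions stated above.
M_ATM = 5.15e18          # total mass of the atmosphere, kg
FRAC_BELOW_2_5KM = 0.35  # assumed fraction of that mass below 2.5 km
CP_AIR = 1005.0          # specific heat of air, J/kg/C
DT = 8.0                 # assumed average temperature of that layer above 0 C

mass_warm_air = M_ATM * FRAC_BELOW_2_5KM       # ~1.8e18 kg
energy_available = mass_warm_air * CP_AIR * DT
print(f"{energy_available:.2e} J")             # ~1.45e22 J, as stated above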

    Of course, atmospheric energy exists in gradients/bands – and we could never actually get all of this energy circulating over the ice sheets to exchange it all. But precisely because it is an absurd impossibility, it serves as a good limit for understanding the absolute maximum of what could happen. The reality will of course be much less.

    As for the ice, here are the numbers I used: Antarctica 3×10^7 km3 at -57C average; Greenland 3×10^6 km3 at -30C average; specific heat of ice: 2,108 J/kg/C; heat of fusion of ice: 3.34×10^5 J/kg. So we have:

    1.36×10^25 Joules is the energy deficit of the ice relative to water at 0C – it will take this much energy to warm all of the ice to 0C and melt it.
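
    Again, for anyone checking the arithmetic, here is the ice side of the ledger. One extra assumption of mine that is not stated above: an ice density of 917 kg/m^3 to convert the volumes to mass.

# Energy needed to warm the ice sheets to 0 C and melt them, using the values
# above.  The ice density (917 kg/m^3) is my own added assumption.
RHO_ICE = 917.0      # kg/m^3 (assumed)
CP_ICE = 2108.0      # J/kg/C
L_FUSION = 3.34e5    # J/kg

def joules_to_melt(volume_km3, avg_temp_c):
    """Energy to warm ice from avg_temp_c up to 0 C and then melt it."""
    mass = volume_km3 * 1e9 * RHO_ICE              # km^3 -> m^3 -> kg
    return mass * (CP_ICE * (0.0 - avg_temp_c) + L_FUSION)

antarctica = joules_to_melt(3e7, -57.0)
greenland = joules_to_melt(3e6, -30.0)
print(f"total: {antarctica + greenland:.2e} J")    # ~1.36e25 J, as stated above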

    So, if we mix all of the energy in the atmosphere with the ice, we can melt enough ice to raise sea level by 2.8 inches. I skipped some steps to save time, but I used 220 feet of total sea-level rise if all of the ice melts (the commonly accepted figure) and took the ratio of the energies between the ice and the atmosphere. The energy needed by the ice below 0C is over 900 times the energy available in the atmosphere above 0C. My assumption (for finding the absurd limit) is that we can concentrate the energy of the atmosphere so that all of it goes to melting a quantity of ice. It is also possible that the energy would just warm the ice and melt none of it.

    So, we could get ~3 inches of sea level rise if we traded all of the thermal energy in the atmosphere above 0C. Here is something else to consider. To do this would bring the atmosphere of Earth below 2.5km to 0C! This is about 6C colder than the coldest point during a glacial period!

    If the atmosphere heats by 4C in this century, you could trade that energy for 1.4 inches of sea level rise to cool the atmosphere back down.
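
    The conversion from the energy ratio to inches of sea level goes like this (same 220 ft full-melt figure and the two energy totals from above):

# Sea-level rise implied by trading atmospheric heat for melted ice,
# scaling the commonly cited 220 ft of rise for a full melt by the
# ratio of the energies computed above.
E_ATMOSPHERE = 1.45e22        # J, sub-2.5 km air cooled from 8 C to 0 C
E_ICE_TOTAL = 1.36e25         # J, to warm and melt all of the land ice
FULL_MELT_RISE_IN = 220 * 12  # 220 ft expressed in inches

rise = FULL_MELT_RISE_IN * E_ATMOSPHERE / E_ICE_TOTAL
print(f"all sub-2.5 km heat above 0 C -> {rise:.1f} in of sea-level rise")   # ~2.8 in

# The same trade for a hypothetical 4 C warming of that layer:
E_4C = 1.8e18 * 1005.0 * 4.0
print(f"4 C of extra warmth traded away -> {FULL_MELT_RISE_IN * E_4C / E_ICE_TOTAL:.1f} in")  # ~1.4 in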

    A lot more could be said, but I’ll wait to see if there is interest to discuss.

    Did I make any mistakes in the calculations or logic?

    Ps – another interesting point to consider: take any quantity of ice in Antarctica at -57C, melt it to water at 0C, and call the energy required “E” Joules. If you then add roughly another “E” Joules to that water, it will boil! Melting ice at -57C takes a lot of energy, about 75% of which goes just into the phase transition.
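
    A per-kilogram check of that last point (the 4,186 J/kg/C specific heat of water is my addition; the other numbers are from above):

# Compare the energy to melt 1 kg of -57 C ice with the energy to then heat
# the meltwater from 0 C to 100 C.
CP_ICE = 2108.0     # J/kg/C
CP_WATER = 4186.0   # J/kg/C (added assumption)
L_FUSION = 3.34e5   # J/kg

e_melt = CP_ICE * 57 + L_FUSION   # warm ice from -57 C to 0 C, then melt it
e_boil = CP_WATER * 100           # heat the meltwater from 0 C to 100 C
print(f"melt: {e_melt:.0f} J/kg   heat 0->100 C: {e_boil:.0f} J/kg")
print(f"share of the melt energy spent on the phase change: {L_FUSION / e_melt:.0%}")
# -> ~454 kJ/kg to melt vs ~419 kJ/kg to reach boiling; ~74% of the melt
#    energy goes into the phase change.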

    William

    [Very effective use of the “Test” page. Thank you. .mod]

  80. Nick, you must know that this is not the case in reality, and it has been disproved, particularly for Australia.

    Hansen’s focus was on the isotropic component of the covariance of temperature, which assumes a constant correlation decay in all directions. “In reality atmospheric fields are rarely isotropic.”*

    “It has long been established that spatial scale of climate variables varies geographically and depends on the choice of directions.” (Chen, D. et al. 2016)*

    Even the BOM has pointed out that for Australian weather:

    The observation of considerable anisotropy is in contrast with Hansen and Lebedeff (1987) …Clearly, anisotropy represents an important characteristic of Australian temperature anomalies, which should be accommodated in analyses of Australian climate variability. – (Jones, D. A. and Trewin, B. 2000)**

    The real-world problem of accurately representing the spatial variability (spatial coherence) of climate data has not yet been resolved:

    Unfortunately, our experience has been that the highly irregular distribution of stations in our network, combined with spatial and temporal variability in the anisotropic correlation model, has precluded the in-depth use of this feature* in analysis, as model parameters, length scales and error variances could not be accurately and robustly computed. – (Jones, D. A. and Trewin, B. 2000)

    And this is a huge problem in my opinion, because NASA’s GISS, NOAA’s NCDC, the CRU and the satellites follow the convention of averaging measurements within spatial areas and reporting those averages. The fact that gridded products aggregate measurements to create areal averages from point-level data is – of course – very well known and discussed in the literature:

    Similar datasets are produced for many regions, such as those maintained by the U.S. Historical Climatology Network. Climate models also only output estimates of climate measures at the level of areal regions. This means that many of the issues stemming from transferring information from the point level to the grid level that apply to historical data are also relevant for downscaling climate models. – (Director, H., and L. Bornn, 2015)****

    *Chen, D. et al. Satellite measurements reveal strong anisotropy in spatial coherence of climate variations over the Tibet Plateau. Sci. Rep. 6, 30304; doi: 10.1038/srep30304 (2016).

    **Jones, D. A. and Trewin, B. 2000. The spatial structure of monthly temperature anomalies over Australia. Aust. Met. Mag. 49, 261-276. National Climate Centre, Bureau of Meteorology, Melbourne, Australia.

    ***The anisotropy of temperature anomalies!

    ****Director, H., and L. Bornn, 2015: Connecting point-level and gridded moments in the analysis of climate data. J. Climate, 28, 3496–3510, doi:10.1175/JCLI-D-14-00571.1.
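
    To make the isotropic-vs-anisotropic distinction concrete, here is a toy illustration (my own sketch, not the method of Hansen and Lebedeff or of any of the papers cited above): an isotropic correlation model assigns the same correlation to any pair of stations a given distance apart, while an anisotropic one lets the decay length depend on direction.

# Toy correlation models: isotropic (one decay length in every direction)
# versus anisotropic (here, a longer east-west length scale than north-south).
# Length scales are illustrative placeholders.
import math

def iso_corr(dx_km, dy_km, L=1000.0):
    return math.exp(-math.hypot(dx_km, dy_km) / L)

def aniso_corr(dx_km, dy_km, L_ew=1500.0, L_ns=500.0):
    # distance rescaled by a direction-dependent length scale
    return math.exp(-math.hypot(dx_km / L_ew, dy_km / L_ns))

for dx, dy in [(800, 0), (0, 800), (566, 566)]:   # ~800 km apart, three bearings
    print(f"offset ({dx:>3},{dy:>3}) km   iso={iso_corr(dx, dy):.2f}   aniso={aniso_corr(dx, dy):.2f}")

# The isotropic model gives (nearly) the same value for all three pairs; the
# anisotropic one does not - the kind of directional structure Jones & Trewin
# describe for Australian temperature anomalies.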

  81. thomasjk April 20, 2018 at 9:42 am
    The earlier trend was global, Bob, thus a World War rather than a conflict in which militarily advanced nation-states tried to bomb a developing country back into the stone age.

  82. Can pictures be hosted on this site? I can’t see how to do that, but I do see some wordpress pictures.
    Unfortunately the picture hosts that I have been using delete my pictures after a few weeks.
    I can’t see an “Add Media” button anywhere.

  83. Here is the graph from a study on C4 plant growth rates as compared to C3 plants which have CO2 pumped in to increase the growth rate. From this study: https://www.sciencenews.org/article/rising-co2-levels-might-not-be-good-plants-we-thought [image: CO2 fertilization study, 041918_EE_CO2_inline_370.png]

  84. Talk about slimy writing:

    The Congressional Budget Office estimated that implementing a secret science policy like the one proposed by EPA….

    The “policy” is the Secret Science Reform Act*, i.e. a policy to end secret science!

    The legislation this policy is based on, the HONEST Act**, has received significant opposition from the scientific community and other organizations because of the potential for this policy to exclude data vital to informed decision-making.

    Here’s what the American Chemistry Council (ACC) had to say about the HONEST Act:

    “We are pleased the House today passed H.R. 1430, the HONEST Act; Chairman Smith is to be commended for his leadership and commitment to improve EPA science. It is critical that the regulated community and the public have confidence that decisions reached by EPA are grounded in transparent and reproducible science, while ensuring the protection of confidential business information and competitive intelligence. By ensuring that the EPA utilizes high quality science and shares underlying data used to reach decisions, the HONEST Act would foster a regulatory environment that will allow the U.S. business of chemistry to continue to develop safe, innovative products that Americans depend on in their everyday lives.

    “We urge the Senate to take up the bill and are committed to working with Congress to advance legislation to increase transparency and strengthen public confidence in EPA’s scientific analyses and Agency decision making.”

    *H.R. 1030, Secret Science Reform Act of 2015
    **H.R. 1430, the “Honest and Open New EPA Science Treatment Act of 2017”

  85. If you only ever follow a handful of Twitter feeds, follow Katica (famous for the Paul Combetta/stonetear BleachBit discovery, which may have been the single most significant scandal of the election cycle, possibly the key to victory).

    Katica is right, as usual.

  86. Yet the wisdom of the crowd is actually smarter than either of us. – mikelorrey

    What?

    That is complete and utter BS. The “wisdom of the crowd” is a joke to the intelligentsia!

    It is the exact opposite of Einstein’s* position on truth!

    The only way this could possibly be “true” is in the narrow statistical sense of watching large numbers of gamblers and betting on that knowledge.

    *The globe’s greatest scientist – ever – since Newton!!!

  87. you clearly do not know me. One reason we invented blockchain technology was to tear down the globalists, the nationalists, the central banks and bankers, the fiat money speculators, the ponzi scammers, the naked shorting hedge funders, and all others that seek to demolish and degrade western civilization. The wealth of the anglosphere was built on sound money: gold and silver. It has decayed under fiat money and become a looters paradise. Blockchain enables digital money that behaves exactly like gold and silver money. That is exactly the opposite of what the globalists want. Please take your ad homs elsewhere. – mikelorrey

    This reply is machine-like; are you sure you aren’t a bot?

    Again, who are you people? You have no human sense, you come across as alien, in any sense of the word!

  88. You may or may not be HTML savvy (I’m not, particularly), but if you click the test button at the top of the page you will find that <i>text</i> will give you italics, and <b>text</b> will give you bold lettering. Or you can combine both to give bold italics. The emphasis will be there, but you won’t be offending folks’ forum and blog protocol sensibilities. I’ve been called out, by the way, for such “offenses” in my early days on the internet. If uncertain as to whether the code works for your comment, you can use the test page to test your entry. Go figure ;)

  89. My Mongrel, Rikki Tikki
    May 22, 2018

    Rikki was 6 months old in this picture. He was a king of dogs. I could walk him and his mom in San Francisco with no leash on. They knew exactly what I expected of them at all times.

    [image: Rikki and Me]

  90. I’m flummoxed. I have created a chart in Excel, but when I try to paste it into this comment, nothing happens. Neither right-mouse-click and select “Paste” nor Ctrl-V. And I have tried both Comodo Dragon and MS Edge. Still a browser problem? Or my computer? Or just the nut behind the keyboard?
