$\newcommand{\vect}[1]{\mathbf{#1}} \newcommand{\uvect}[1]{\hat{#1}} \newcommand{\abs}[1]{\lvert#1\rvert} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\I}{\mathrm{i}} \newcommand{\ket}[1]{\left|#1\right\rangle} \newcommand{\bra}[1]{\left\langle#1\right|} \newcommand{\braket}[1]{\langle#1\rangle} \newcommand{\op}[1]{\mathbf{#1}} \newcommand{\mat}[1]{\mathbf{#1}} \newcommand{\d}{\mathrm{d}} \newcommand{\pdiff}[3][]{\frac{\partial^{#1} #2}{\partial {#3}^{#1}}} \newcommand{\diff}[3][]{\frac{\d^{#1} #2}{\d {#3}^{#1}}} \newcommand{\ddiff}[3][]{\frac{\delta^{#1} #2}{\delta {#3}^{#1}}} \DeclareMathOperator{\erf}{erf} \DeclareMathOperator{\Tr}{Tr} \DeclareMathOperator{\order}{O} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\sech}{sech} $

The WSU Quantum Initiative

The Quantum Initiative at Washington State University unites efforts in quantum research and workforce development across the university. Key players include members of the Department of Physics and Astronomy, who study fundamental quantum science and use ultra-cold atoms, non-linear optics, and quantum spins for quantum sensing and quantum computing technologies; members of the School of Electrical Engineering & Computer Science, who study the classical-quantum interface and cryoelectronics; and others across the university who explore quantum applications in areas such as chemistry, mathematics, and hydrogen energy research.

As a land grant institution, WSU is committed to training the quantum-smart workforce needed to support the current quantum revolution in the PNW region. To lead the development of emerging quantum technologies, students need a broad set of skills, including not only the foundation in quantum mechanics provided by our physics program, but also facility with computational and data analysis techniques and practical hands-on experience with relevant technologies such as electronics, optics, and cryogenics. Supporting these needs, we partner with a new interdisciplinary program called iSciMath, training students to work at the boundaries of traditional academic domains in STEM. The iSciMath program centered at WSU brings together core participants from academia, government, and industry to foster the types of interactions and innovations seen at Bell Labs and Xerox PARC in their heyday, giving students both breadth and depth – as we like to say, a graduate of this program will be a jack of all trades, and a master of some.

Read more…

Docker

Some notes about using Docker to host various applications like Heptapod, Discourse, CoCalc etc. on Linux and AWS.

Read more…

super_hydro: Superfluid Hydrodynamics Explorer

Vortices in a rotating BEC with a pinning site and tracer particles.

super_hydro: Exploring Superfluids

Nobel laureate Richard Feynman said: "I think I can safely say that nobody really understands quantum mechanics". Part of the reason is that quantum mechanics describes physical processes in a regime that is far removed from our every-day experience – namely, when particles are extremely cold and move so slowly that they behave more like waves than like particles.

This application attempts to help you develop an intuition for quantum behavior by exploiting the property that collections of extremely cold atoms behave as a fluid – a superfluid – whose dynamics can be explored in real-time. By allowing you to interact and play with simulations of these superfluids, we hope you will have fun and develop an intuition for some of the new features present in quantum systems, taking one step closer to developing an intuitive understanding of quantum mechanics.

Beyond developing an intuition for quantum mechanics, this project provides an extensible and general framework for running interactive simulations capable of sufficiently high frame-rates for real-time interactions using high-performance computing techniques such as GPU acceleration. The framework is easily extended to any application whose main interface is through a 2D density plot - including many fluid dynamical simulations.

Read more…

JavaScript

This post discusses JavaScript.

Read more…

Prerequisites: Models

This post describes various models that are useful for demonstrating interesting physics.

Read more…

Bayesian Analysis

In this post we perform a simple but explicit analysis of a curve fitting using Bayesian techniques.
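As a taste of the approach, here is a minimal, self-contained sketch (not the post's code, and with hypothetical synthetic data): a grid-based posterior for the slope of a straight-line fit with a flat prior.

import numpy as np

# Synthetic data y = a*x + noise, with true slope a = 2 (hypothetical example).
rng = np.random.default_rng(seed=2)
x = np.linspace(0, 1, 20)
sigma = 0.1
y = 2.0 * x + rng.normal(0, sigma, len(x))

# Flat prior: the posterior is proportional to the Gaussian likelihood.
a = np.linspace(1.5, 2.5, 500)
log_L = -0.5 * (((y - a[:, None] * x) / sigma) ** 2).sum(axis=1)
posterior = np.exp(log_L - log_L.max())
posterior /= np.trapz(posterior, a)
print("posterior mean slope:", np.trapz(a * posterior, a))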

Read more…

Kyle Elsasser

Kyle was raised on a farm in the mountains of northern Idaho before becoming a Nuclear Reactor Technician on US Navy submarines and later a firefighter/EMT back in his hometown. His curiosity got the better of him and he attended Eastern Washington University, completing bachelor's degrees in Physics and in Mathematics in 2017 before continuing to Washington State University to pursue his PhD in Physics.

Currently, he is working jointly under Dr Forbes and Dr Bose to re-derive the Tolman-Oppenheimer-Volkoff (TOV) equations for arbitrary rotation speeds, and is interested in investigating the mechanisms that cause neutron star glitching.

Read more…

Chunde Huang

Chunde comes from China, where he received his bachelor's degree (Software Engineering) and master's degree (Computer Science) from Xiamen University; his research there was in computer vision and machine learning. He worked as a professional software engineer for three years, with experience in embedded-system framework development (C++ middleware for Android OS), smart traffic surveillance (object detection and tracking), and distributed systems (Content Distribution Networks). He started his pursuit of a Ph.D. in physics in 2013 at WSU.

Read more…

Praveer Tiwari

Praveer comes from India, where he received his BSc-MSc (Research) degree in Physics from the Indian Institute of Science; his research there was on accretion-disk modeling and gravitational-wave data analysis. He started his pursuit of a Ph.D. in physics in 2016 at WSU. Since 2017, he has worked in Professor Jeffrey McMahon's group, learning different aspects of machine learning and computational condensed matter.

Currently, he is working jointly under Dr Forbes and Dr Bose on constraining the parameters of the neutron star equation of state using gravitational wave detections. He is also working on employing novel machine learning techniques to characterize different aspects of gravitational wave detections.

Read more…

Errata

This post collects various typos etc. in my publications. If you think you see something wrong that is not listed here, please let me know so I can correct it or include it.

Read more…

Matplotlib subplots and gridspec

Matplotlib Subplot Placement

Here we describe how to place subplots in matplotlib.


Overview

We start with several figure components that we would like to arrange.

In [1]:
%pylab inline --no-import-all

def fig1():
    x = np.linspace(0, 1, 100)
    y = x**2
    plt.plot(x, y)
    plt.xlabel('x'); plt.ylabel('x^2')
    
def fig2():
    x = np.linspace(-1, 1, 100)
    y = x**3
    plt.plot(x, y)
    plt.xlabel('x'); plt.ylabel('x^3')    
Populating the interactive namespace from numpy and matplotlib

Here is a typical way of arranging the plots using subplots:

In [13]:
def fig3():
    plt.subplot(121)
    fig1()
    plt.subplot(122)
    fig2()
fig3()

Now, what if we want to position this composite figure within a larger layout? GridSpec is a good way to start. It allows you to generate a SubplotSpec which can be used to locate the components. We first need to update our previous figure-drawing components to draw themselves in a SubplotSpec. We can reuse our previous functions (which use top-level plt. functions) if we set the active axis.

In [24]:
from functools import partial
import matplotlib.gridspec
import matplotlib as mpl

def fig3(subplot_spec=None):
    if subplot_spec is None:
        GridSpec = mpl.gridspec.GridSpec
    else:
        GridSpec = partial(mpl.gridspec.GridSpecFromSubplotSpec, 
                           subplot_spec=subplot_spec)
    gs = GridSpec(1, 2)
    ax = plt.subplot(gs[0, 0])
    fig1()
    ax = plt.subplot(gs[0, 1])
    fig2()
fig3()
In [25]:
fig = plt.figure(constrained_layout=True)
gs = mpl.gridspec.GridSpec(2, 2, figure=fig)
fig3(subplot_spec=gs[0, 0])
fig3(subplot_spec=gs[1, 1])

Inset Axes

If you want to locate an axis precisely, you can use inset_axes. You can control the location by specifying the transform:

  • bbox_transform=ax.transAxes: Coordinates will be relative to the parent axis.
  • bbox_transform=ax.transData: Coordinates will be relative to the data points in the parent axis.
  • bbox_transform=blended_transform_factory(ax.transData, ax.transAxes): Data coordinates for $x$ and axis coordinates in $y$.

Once this is done, locate the axis by specifying the bounding box and then the location relative to this:

  • bbox_to_anchor=(left, bottom, width, height): Bounding box in the specified coordinate system.
  • loc: Location such as lower left or center.
In [62]:
%pylab inline --no-import-all
from mpl_toolkits.axes_grid1.inset_locator import inset_axes

ax = plt.subplot(111)
plt.axis([0, 2, 0, 2])
inset_axes(ax, width="100%", height="100%", 
           bbox_to_anchor=(0.5, 0.5, 0.5, 0.5),
           #bbox_transform=ax.transData, 
           loc='lower left',
           bbox_transform=ax.transAxes,
           borderpad=0)
Populating the interactive namespace from numpy and matplotlib
Out[62]:
<mpl_toolkits.axes_grid1.parasite_axes.AxesHostAxes at 0x11a8ab0f0>

Here we place subaxes at particular locations along $x$.

In [29]:
%pylab inline --no-import-all
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from matplotlib.transforms import blended_transform_factory


ax = plt.subplot(111)
ax.set_xscale('log')
ax.set_xlim(0.01, 1000)
xs = np.array([0.1, 1, 10,100])
exp_dw = np.exp(np.diff(np.log(xs)).min()/2)
for x in xs:
    inset_axes(ax, width="100%", height="70%",
               bbox_to_anchor=(x/exp_dw, 0, x*exp_dw-x/exp_dw, 1),
               #bbox_transform=ax.transData, 
               loc='center',
               bbox_transform=blended_transform_factory(
                   ax.transData,
                   ax.transAxes),
               borderpad=0)
Populating the interactive namespace from numpy and matplotlib

Python Performance

Here we compare various approaches to solving some tasks in python with an eye for performance. Don't forget Donald Knuth's words:

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

In other words, profile and optimize after making sure your code is correct, and focus on the places where your profiling tells you you are wasting time.
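For example, one might start with a sketch like the following (the function names here are purely illustrative), profiling a deliberately slow function before reaching for optimizations:

import cProfile
import numpy as np

def slow_sum(N):
    # A pure-python loop: a typical profiling hotspot.
    total = 0.0
    for n in range(N):
        total += n ** 2
    return total

def fast_sum(N):
    # A vectorized equivalent for comparison.
    return float((np.arange(N, dtype=float) ** 2).sum())

assert np.allclose(slow_sum(10 ** 5), fast_sum(10 ** 5))

# Profile first: optimize only what the profiler identifies as slow.
cProfile.run("slow_sum(10 ** 6)", sort="cumulative")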

Read more…

Extended Phase Space

Read more…

Beyond Normal Distributions

In this post we discuss modeling covariances beyond the Gaussian approximation.

Read more…

Workflow

Forbes Group Workflow

This post describes the recommended workflow for developing and documenting code using the Python ecosystem. This workflow uses Jupyter notebooks for documenting code development and usage, python modules with comprehensive testing, version control, online collaboration, and Conda environments with carefully specified requirements for reproducible computing.

Read more…

Online Course Management at WSU

This post contains some notes about how to effectively use Office 365 (OneDrive), and Google Drive (via the G-Suite for education) for teaching a course at WSU. These notes were developed in the Fall of 2018 while I set up the courses for Physics 450 (undergraduate quantum mechanics) and Physics 521 (graduate classical mechanics).

Read more…

Path Integrals

Based on a discussion with Fred Gittes, we compute the propagator for small systems using a path integral.

Read more…

Logging with Python

In this post, we discuss how to use the Python logging facilities in your code.
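As a minimal sketch of the pattern discussed in the post (the function here is hypothetical): library code obtains a module-level logger, while the application configures handlers and levels once at the top level.

import logging

# Library code: get a named logger and log; never configure handlers here.
logger = logging.getLogger(__name__)

def sqrt_abs(x):
    logger.debug("sqrt_abs called with x=%r", x)
    if x < 0:
        logger.warning("negative input %r: using absolute value", x)
        x = -x
    return x ** 0.5

if __name__ == "__main__":
    # Application code: configure the logging system once at the top level.
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s %(name)s %(levelname)s: %(message)s")
    print(sqrt_abs(-4.0))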

Read more…

Hawking Radiation in BECs

Shortcut to Adiabaticity

In spin-orbit coupled BECs, changes in the detuning act in the same way as moving a trap; thus one can engineer a way of adiabatically moving a cloud by canceling the two effects.

Read more…

Nikola Website

Nikola for Websites

In this post we describe how we use Nikola for hosting research websites. Note: this is not a comprehensive discussion of Nikola's features but a streamlined explanation of the assumptions we make for simplified organization.

Read more…

Photos and Git-Annex

In this post we discuss using git-annex to manage photos.

Read more…

git-annex

git-annex is a tool for managing large data files with git. The idea is to store the information about the file in a git repository that can be synchronized, but to store the actual data separately. The annex keeps track of where the file actually resides (which may be in a different repository, or on another computer) and allows you to manage the file (renaming, moving, etc.) without having the actual content present.

Here we explore git-annex as a mechanism for replacing and interacting with Dropbox, Google Drive, One Drive etc. with the following goals:

  1. Multiple users can share data.
  2. Data shared across many platforms: HPC clusters, laptops, desktops, Mac, Windows, Linux, CoCalc, etc.
  3. Allow only a subset of data to be stored on any particular device (esp. laptops) if memory on that device is limited.
  4. Utilize cloud storage options including Google Cloud, Dropbox, and Microsoft OneDrive, both as redundant backups and as a mechanism for sharing data with others who are only able to use one of these services.
  5. Automatic and manual sync options.

Read more…

Thermodynamics

This post discusses how phase equilibrium is established. In particular, we discuss multi-component saturating systems which spontaneously form droplets at zero temperature. This was motivated by the discussion of the conditions for droplet formation in Bose-Fermi mixtures in 1801.00346 and 1804.03278; specifically, the following conditions in 1804.03278:

\begin{gather} \mathcal{E} < 0, \qquad \mathcal{P} = 0; \tag{i}\\ \mu_b\pdiff{P}{n_f} = \mu_f\pdiff{P}{n_b}; \tag{ii}\\ \pdiff{\mu_b}{n_b} > 0 , \qquad \pdiff{\mu_f}{n_f} > 0, \qquad \pdiff{\mu_b}{n_b}\pdiff{\mu_f}{n_f} > \left(\pdiff{\mu_b}{n_f}\right)^2. \tag{iii} \end{gather}

  • 1801.00346: https://arxiv.org/abs/1801.00346
  • 1804.03278: https://arxiv.org/abs/1804.03278

Read more…

Creating a Nikola Theme

Massively

Nikola Themes

This post describes my process of creating a new Nikola theme based on the HTML5 UP template called Massively.

Read more…

Carl Brannen


Pauli Spin Matrices

In [4]:
import numpy as np

def com(A, B):
    return A.dot(B) - B.dot(A)
sigma = np.array(
    [[[0, 1],
      [1, 0]],
     [[0, -1j],
      [1j, 0]],
     [[1, 0],
      [0, -1]]])

assert np.allclose(com(sigma[0], sigma[1]), 2j*sigma[2])

Quantum Random Walks

Here we demonstrate some simple code to explore quantum random walks. To play with this notebook, click on:

Binder
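The post's code is on the linked Binder; as a flavor of the idea, here is a minimal, independent sketch of a one-dimensional discrete-time walk with a Hadamard coin:

import numpy as np

N = 100                     # number of steps
M = 2 * N + 1               # lattice sites
psi = np.zeros((M, 2), dtype=complex)
psi[N] = np.array([1, 1j]) / np.sqrt(2)       # symmetric initial coin state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin

for _ in range(N):
    psi = psi @ H.T                           # toss the coin at every site
    psi[:, 0] = np.roll(psi[:, 0], -1)        # coin-0 amplitude moves left
    psi[:, 1] = np.roll(psi[:, 1], +1)        # coin-1 amplitude moves right

prob = (abs(psi) ** 2).sum(axis=1)
assert np.isclose(prob.sum(), 1)              # evolution is unitary
print("rms displacement:", np.sqrt((prob * (np.arange(M) - N) ** 2).sum()))

Unlike a classical random walk, whose rms displacement grows as the square root of the number of steps, the quantum walk spreads ballistically.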

Read more…

Plotting in a Jupyter Notebook

Here we demonstrate various options for plotting in Jupyter notebooks, with a special focus on monitoring time-dependent processes in simulations.

In [1]:
import mmf_setup;mmf_setup.nbinit(theme='mmf')

This cell contains some definitions for equations and some CSS for styling the notebook. If things look a bit strange, please try the following:

  • Choose "Trust Notebook" from the "File" menu.
  • Re-execute this cell.
  • Reload the notebook.

Test Problem

Here we will mock up a simulation. The visualization consists of a main panel showing the evolution of a wavefunction (possibly a couple of components) which needs to be large to display details. In addition, we would like to have various smaller panels monitoring auxiliary quantities.

Modular Plotting

Often one has several components one would like to (optionally) plot. These often need to be laid out in a stack of plots. Here we discuss how to do this.

Matplotlib

In [51]:
%pylab inline --no-import-all

ts = np.linspace(0, 10, 100)
a = np.exp(-(ts-2)**2/2.0/1.0)
b = np.sin(ts)
c = ts**2/50 - 1.0
Populating the interactive namespace from numpy and matplotlib

Here is a manual construction of the desired plot:

In [255]:
from matplotlib.gridspec import GridSpec, GridSpecFromSubplotSpec

# Here is the overall grid layout
gs = GridSpec(1, 2)

# In the left half we have some plot:
plt.subplot(gs[0,0])
plt.plot(ts, ts)

# Now we construct our desired layout in the right half
gs = GridSpecFromSubplotSpec(
    3, 1, 
    hspace=0.0,
    subplot_spec=gs[0, 1])

# Plot a
ax = plt.subplot(gs[0,0])
ax.plot(ts, a)
ax.set_ylabel('a')
ax.xaxis.set_ticklabels([])
ax.tick_params(axis='x', direction='in')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")

# Plot b
ax = plt.subplot(gs[1,0])
ax.plot(ts, b)
ax.set_ylabel('b')
ax.xaxis.set_ticklabels([])
ax.tick_params(axis='x', direction='in')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")

# Plot c
ax = plt.subplot(gs[2,0])
ax.plot(ts, c)
ax.set_ylabel('c')
ax.yaxis.tick_right()
ax.yaxis.set_label_position("right")
ax.set_xlabel('t')

# Adjust layout if desired
plt.tight_layout()

Much of the repeated axis manipulation can be done by a function. Here is an idea using a generator to allow the plots to be added:

In [256]:
from matplotlib.gridspec import GridSpec, GridSpecFromSubplotSpec

def vgrid(N, subplot_spec=None, right=False, **kw):
    if subplot_spec is None:
        subplot_spec = GridSpec(1, 1)[0, 0]
    gs = GridSpecFromSubplotSpec(
        N, 1, subplot_spec=subplot_spec, **kw)
    for n in range(N):
        ax = plt.subplot(gs[n, 0])
        
        if right:
            ax.yaxis.tick_right()
            ax.yaxis.set_label_position("right")
        if n < N - 1:
            ax.xaxis.set_ticklabels([])
            ax.tick_params(axis='x', direction='in')
            
        yield ax
In [257]:
# Here is the overall grid layout
gs = GridSpec(1, 2)

# In the left half we have some plot:
plt.subplot(gs[0,0])
plt.plot(ts, ts)

vg = vgrid(N=3, hspace=0.0, right=True, subplot_spec=gs[0, 1])

# Now we construct our desired layout in the right half
ax = next(vg)
ax.plot(ts, a)
ax.set_ylabel('a')

# Plot b
ax = next(vg)
ax.plot(ts, b)
ax.set_ylabel('b')

# Plot c
ax = next(vg)
ax.plot(ts, c)
ax.set_ylabel('c')
ax.set_xlabel('t')

# Adjust layout if desired
plt.tight_layout()

Here is another idea. We start by making the plots, then inserting them into a layout. This demonstrates the feasibility of adjusting the positions after the plots are generated. Note: we need to do a bit of gymnastics to get unique plot locations initially.

In [258]:
from matplotlib.gridspec import GridSpec, GridSpecFromSubplotSpec

fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(ts, a)
ax.set_ylabel('a')
axs = [ax]

ax = fig.add_subplot(2,1,2)
ax.plot(ts, b)
ax.set_ylabel('b')
axs.append(ax)

ax = fig.add_subplot(3,1,3)
ax.plot(ts, c)
ax.set_ylabel('c')
axs.append(ax)

# Now we know how many plots, we can do the final layout
gs = GridSpec(1, 2)

# In the left half we have some plot:
ax = fig.add_subplot(gs[0,0])
ax.plot(ts, ts)

gs = GridSpecFromSubplotSpec(3, 1, hspace=0.0, subplot_spec=gs[0,1])
#gs = GridSpec(3, 1, hspace=0.0)

for n, ax in enumerate(axs):
    subplotspec = gs[n,0]
    ax.set_position(subplotspec.get_position(fig))
    ax.set_subplotspec(subplotspec)  # necessary if using tight_layout()
    ax.yaxis.tick_right()
    ax.yaxis.set_label_position("right")
    if n < len(axs) - 1:
        ax.xaxis.set_ticklabels([])
        ax.tick_params(axis='x', direction='in')
    
plt.tight_layout()

Now we can encapsulate this in a class. We use the same API as above with the generator, but use some heuristics to determine how to form the final layout.

In [261]:

class Grid(object):
    """Object for stacking plots.
    """    
    def __init__(self, fig=None,
                 subplot_spec=None, right=False, **kw):
        if fig is None:
            fig = plt.gcf()
            
        if subplot_spec is None:
            subplot_spec = GridSpec(1, 1)[0, 0]
            
        self.fig = fig        
        self.subplot_spec = subplot_spec
        self.shape = [0, 0]        
        self.kw = kw
        self.right = right
        self.axes = []
        
    def next(self, down=True, share=False):
        """Return the next axis."""
        if self.shape == [0, 0]:
            # Initialize
            self.shape = [1, 1]
            ax = self.fig.add_subplot(self.subplot_spec)
            self.axes.append(ax)
            return ax
        
        if self.shape != [1, 1]:
            # Set "down" to be consistent with current shape
            down = self.shape[0] > self.shape[1]
            
        if down:
            self.shape[0] += 1
            assert self.shape[1] == 1
        else:
            assert self.shape[0] == 1
            self.shape[1] += 1
            
        Nx, Ny = self.shape
        gs = GridSpecFromSubplotSpec(
            Nx, Ny, 
            subplot_spec=self.subplot_spec,
            **self.kw)

        N = max(Nx, Ny)

        args = {}
        if down:
            gs = [gs[_n, 0] for _n in range(N)]
            if share:
                args['sharex'] = self.axes[-1]
        else:
            gs = [gs[0, _n] for _n in range(N)]
            if share:
                args['sharey'] = self.axes[-1]

        
        self.axes.append(self.fig.add_subplot(gs[-1], **args))

        for ax, subplotspec in zip(self.axes, gs):
            ax.set_position(subplotspec.get_position(self.fig))
            ax.set_subplotspec(subplotspec)  # necessary if using tight_layout()
            if self.right:
                ax.yaxis.tick_right()
                ax.yaxis.set_label_position("right")
            if subplotspec is not gs[-1]:
                ax.xaxis.set_ticklabels([])
                ax.tick_params(axis='x', direction='in')
        return ax
In [262]:
from matplotlib.gridspec import GridSpec, GridSpecFromSubplotSpec

# Here is the overall grid layout
gs = GridSpec(1, 2)

# In the left half we have some plot:
plt.subplot(gs[0,0])
plt.plot(ts, ts)

vg = Grid(wspace=0.0, hspace=0.0, right=True, subplot_spec=gs[0, 1])

# Now we construct our desired layout in the right half
ax = vg.next()
ax.plot(ts, a)
ax.set_ylabel('a')

# Plot b
ax = vg.next()
ax.plot(ts, b)
ax.set_ylabel('b')

# Plot c
ax = vg.next()
ax.plot(ts, c)
ax.set_ylabel('c')
ax.set_xlabel('t')

# Adjust layout if desired
plt.tight_layout()

Irregular Grids

Here we look at discretization using irregular grids.

Read more…

Python Attributes, Parameters, and Traits

Making classes in python can be a bit of a pain due to the need to define various methods like __init__(), __repr__(), etc. to get reasonable behavior. In this post we discuss some alternatives for specifying attributes in python classes.
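As a taste of two such alternatives (a minimal sketch): the third-party attrs package and, in Python ≥ 3.7, the standard-library dataclasses both generate __init__(), __repr__(), __eq__(), etc. from simple declarations.

import attr
from dataclasses import dataclass

@attr.s
class ParticleA(object):
    m = attr.ib(default=1.0)
    x = attr.ib(default=0.0)

@dataclass
class ParticleB:
    m: float = 1.0
    x: float = 0.0

# Both get sensible __init__, __repr__, and __eq__ for free:
print(ParticleA(m=2.0))                    # ParticleA(m=2.0, x=0.0)
print(ParticleB(m=2.0) == ParticleB(2.0))  # True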

Read more…

Parallel Computing with IPython

Here we discuss how to use ipyparallel to perform some simple parallel computing tasks.
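A minimal sketch of the kind of usage discussed (this assumes you have already started a cluster, e.g. with ipcluster start -n 4):

import ipyparallel as ipp

rc = ipp.Client()        # connect to the running cluster
view = rc[:]             # a DirectView over all engines

# Distribute a function over the engines and gather the results.
results = view.map_sync(lambda x: x ** 2, range(10))
print(results)           # [0, 1, 4, 9, ...]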

Read more…

Source Code Projects

This post describes the projects I host on Bitbucket. Some of these might be useful for you, but the list also includes private and incomplete projects. If you think you need access to any of these, please contact me: michael.forbes+python@gmail.com.

Read more…


Correlation Plot

Uncertainties

Here we discuss the python uncertainties package and demonstrate some of its features.
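For example (a minimal sketch): values with uncertainties propagate through arithmetic automatically, with correlations tracked.

from uncertainties import ufloat
from uncertainties import umath

x = ufloat(1.0, 0.1)   # 1.0 +/- 0.1
y = ufloat(2.0, 0.2)

print(2 * x + y)       # uncertainties combined in quadrature
print(x - x)           # exactly 0+/-0: x is perfectly correlated with itself
print(umath.sin(x))    # standard functions via the umath module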

Read more…

Creating a Post

Initial Image

The first part of a post should be an image which will appear on cover pages etc. It should be included using one of the following sets of code:

In [2]:
print("This is a really long line of actual code.  Is it formatted differently and with wrapping?")
This is a really long line of actual code.  Is it formatted differently and with wrapping?
  • Pure markdown:

    ![Textual description (alt text) of the image](<URL or /images/filename.png>)
    Optional text such as image credit (which could be in the alt text).
    

    This is the simplest option, but has very little flexibility. For example, markdown images cannot be resized or embedded as links. In principle, this could be nested in HTML with `...` but presently the conversion to HTML passes through [reStructuredText](http://docutils.sourceforge.net/rst.html) which does not permit nested markup.

  • Pure HTML.

    <a href="http://dx.doi.org/10.1103/PhysRevLett.118.155301"
        class="image">
       <img alt="Textual description (alt text) of the image."
       src="<URL or /images/filename.png>">
     </a>
    

    This allows you to add links, or any other formatting.

Image Files

Images can either be referenced by URL, or locally. The advantage of local images is that they will be available even when off-line, or if the link breaks, but they require storing the image locally and distributing it.

To manage images, we have a top level folder /images in our website folder (the top level Nikola project) in which we store all of the images. This folder will be copied by Nikola to the site. We refer to these locally using an absolute filename /images/filename.png. In order to make this work in the Jupyter notebooks while editing, we symlink the /images to the directory where the notebook server is running. Thus, if you always start the server from the top level of the Nikola site, no symlinks are required.

A good source of images is Unsplash (https://unsplash.com). These are free for use without any restrictions (although attribution is appreciated).

Examples

Images

  • Simple Markdown inclusion of photo with credit in Alt text:

![Photo of Stars by Teddy Kelley on Unsplash](https://unsplash.com/photos/uCzBVrIbdvQ/download)

  • Simple Markdown inclusion of photo with credit as a following badge. (The Unsplash page for each image provides the badge code, which you can just copy and paste.):

Photo of Stars Teddy Kelley

Schematic of an expanding BEC entering the regime of negative mass.


Margin Notes

Schematic of an expanding BEC entering the regime of negative mass.

The first part of the post should be an image.

Negative-Mass Hydrodynamics

Negative mass is a peculiar concept. Counter to everyday experience, an object with negative effective mass will accelerate backward when pushed forward. This effect is known to play a crucial role in many condensed matter contexts, where a particle's dispersion can have a rather complicated shape as a function of lattice geometry and doping. In our work we show that negative mass hydrodynamics can also be investigated in ultracold atoms in free space and that these systems offer powerful and unique controls.

Read more…

Coffee

Coffee

Here I discuss coffee.

Read more…

ZNG

Here we describe the ZNG formalism for extending the GPE to finite temperatures.

Read more…

Beyond GPE

Here we collect various methods for going beyond GPE.

Read more…

Python Projects


Galileo

Galilean Covariance

In his "Dialogue Concerning the Two Chief World Systems", Galileo put forth the notion that the laws of physics are the same in any constantly moving (inertial) reference frame. Colloquially this means that if you are on a train, there is no experiment you can do to tell that the train is moving (without looking outside).

This post will explain formally what Galilean covariance means, clarify the difference between covariance and invariance, and elucidate the meaning of Galilean covariance in classical and quantum mechanics. In particular, it will explain the following result obtained by simply changing coordinates, which may appear paradoxical at first:

Consider a modern Lagrangian formulation of a classical object moving without a potential in 1D with coordinates $x$ and $\dot{x}$, and the same object in a moving frame with coordinates $X = x - vt$ and $\dot{X} = \dot{x} - v$. The Lagrangian and conjugate momenta in these frames are:

\begin{align} L[x, \dot{x}, t] &= \frac{m\dot{x}^2}{2}, & p &\equiv \pdiff{L}{\dot{x}} = m\dot{x},\\ L_v[X, \dot{X}, t] &= \frac{m(\dot{X}+v)^2}{2}, & P &\equiv \pdiff{L_v}{\dot{X}} = m(\dot{X}+v) = p. \end{align}

Perhaps surprisingly, the conjugate momentum $P$ is the same in the moving frame, whereas Galilean invariance suggests that one should have a description in terms of $P = m\dot{X} = p - mv$. The latter description exists, but requires the somewhat non-intuitive addition of a total derivative to the Lagrangian.
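The conjugate momenta quoted above are just partial derivatives of the Lagrangians, which one can verify symbolically; here is a quick sketch using sympy:

import sympy as sp

m, v = sp.symbols('m v', positive=True)
xdot, Xdot = sp.symbols('xdot Xdot')

L = m * xdot ** 2 / 2            # lab frame
L_v = m * (Xdot + v) ** 2 / 2    # moving frame

p = sp.diff(L, xdot)             # m*xdot
P = sp.diff(L_v, Xdot)           # m*(Xdot + v): equals p, not m*Xdot
print(p, "|", sp.expand(P))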

Read more…

Negative-Mass Hydrodynamics

Negative mass is a peculiar concept. Counter to everyday experience, an object with negative effective mass will accelerate backward when pushed forward. This effect is known to play a crucial role in many condensed matter contexts, where a particle's dispersion can have a rather complicated shape as a function of lattice geometry and doping. In our work we show that negative mass hydrodynamics can also be investigated in ultracold atoms in free space and that these systems offer powerful and unique controls.

Read more…

Michael McNeil Forbes

Our universe is an incredible place. Despite its diversity and apparent complexity, an amazing amount of it can be described by relatively simple physical laws referred to as the Standard Model of particle physics. Much of this complexity "emerges" from the interaction of many simple components. Characterizing the behaviour of "many-body" systems forms a focus for much of my research, with applications ranging from some of the coldest places in the universe - cold atom experiments here on earth - to nuclear reactions, the cores of neutron stars, and the origin of matter in our universe.

Read more…

Ted Delikatny

Ted simulates various phenomena related to quantum turbulence in superfluids, including the dynamics and interactions of vortices, solitons, and domain walls. Currently Ted is working to understand the phenomenon of self-trapping in BECs with negative-mass hydrodynamics.

Read more…

Khalid Hossain

Khalid comes from Bangladesh, where he got his MS in theoretical physics from the University of Dhaka. Currently Khalid is simulating two-component superfluid mixtures - spin-orbit coupled Bose-Einstein condensates (BECs) and mixtures of Bose and Fermi superfluids. In particular, the interest is in detecting the entrainment (dragging of one component with another) effect, which may shed light on the astrophysical mystery of neutron star glitching.

Read more…

Saptarshi Rajan Sarkar

Saptarshi is currently looking at quantum turbulence in an axially symmetric Bose-Einstein condensate in which a shockwave is created by a piston. The axially symmetric simulation, although missing some key features like Kelvin waves and vortex reconnections, has a considerably lower computational cost while retaining the shock behaviour. He is also interested in learning about the vortex filament model to look into quantum turbulence in detail.

Md Kamrul Hoque Ome

Ome's primary fields of interest are theoretical nuclear and particle physics. In this regard, he is interested in using field-theoretical and numerical techniques to solve equations that describe subatomic phenomena.

Much of his recent work addresses few-body nuclear physics via chiral effective field theory. In particular, he is studying light nuclei (e.g. the deuteron) at low energies where the effective degrees of freedom are pions and nucleons. Given the chiral potentials, the quantum mechanical analysis of these systems may improve our understanding of the properties of the nuclei.

He also enjoys strong coffee, reading books and being in intellectual environments.

Read more…

Ryan Corbin

Ryan hails from the greater Seattle area. His current research focuses on DFT simulations of nuclei.

Read more…

Undergraduate Research Projects

This page lists projects which might be suitable for undergraduate research.

Read more…

Parking and Commutation

The process of driving and parking can be described using the mathematics of non-commutative algebra, which gives some interesting insight into the difficulty of parallel parking. This discussion is motivated by, and follows, a similar discussion in William Burke's book Applied Differential Geometry.

Mathematical Formulation

Unique and Complete

Consider the motion of a car on a flat plane. To mathematically formulate the problem, we must provide a unique and unambiguous characterization of the state of the car. This can be done in a four-dimensional configuration space with the following four quantities:

  • $(x, y)$: The position of the car can be described by two numbers to locate where the car is in the plane. Note: to be precise we must specify which point on the car lies at this position, and the algebra might be simplified if we are careful about the placement. As a guess, we start by taking this to be the midpoint of the front axle of the car. In this way, the point $(x, y)$ will move in the direction of the front wheels when driving.
  • $\phi$: The orientation of the car. This angle will denote the direction in which the car points.
  • $\theta$: The orientation of the steering wheels. This will be relative to the car so that $\theta=0$ means the wheels point straight ahead.

Stop and think for a moment: is this complete? Can we describe every possible position of a car with a set of these four quantities? The answer is definitely no as the following diagram indicates:

but with an appropriate restriction placed on the possible configurations of the car, you should be able to convince yourself that such a choice of four coordinates is indeed sufficient for our purposes. (If we wish to consider the dynamics of the car, we will additionally need the velocity and angular velocity, but here we shall just consider the geometric motion of the car.)

The second question you should ask is: "Is such a characterization unique?" Here again the answer is no: $\theta = 0°$ and $\theta = 360°$ correspond to the same configuration. Thus, we need to restrict our angular variables to lie between $-180° \leq \theta,\phi < 180°$ for example. With such a restriction, our characterization is both unique and complete, and thus a suitable mathematical formulation of the problem.

Units

To simplify the mathematics, we further restrict our notation so that $x$ and $y$ are specified in meters, while $\theta$ and $\phi$ are specified in radians. In this way, the configuration of the car can be specified by four pure numbers $(x, y, \theta, \phi)$.

Driving

Driving consists of applying two operations to the car which we shall call drive and steer. Mathematically we represent these as operators $\op{D}_s$ and $\op{S}_\theta$ respectively, as follows:

  • $\op{D}_s$ means drive forward (keeping the steering wheel fixed) for distance $s$.
  • $\op{S}_\theta$ means rotate the steering wheel through angle $\theta$.

Steering

Steering is the easiest operation to analyze. Given a state $(x_1, y_1, \theta_1, \phi_1)$, steering takes this to the state $\op{S}_{\theta}(x_1, y_1, \theta_1, \phi_1) = (x_1, y_1, \theta_1+\theta, \phi_1)$. In words: steering changes the direction of the steering wheel, but does not change the position or orientation of the car. Mathematically this operation is a translation, but as we shall see later, it is convenient to represent these operations as matrices. Thus, we add one extra component to our vectors which is fixed:

$$ \vect{p}_1 = \begin{pmatrix} x_1\\ y_1\\ \theta_1\\ \phi_1\\ 1 \end{pmatrix} $$

This same trick is often used in computer graphics to allow both rotations and translations to be represented by matrices. With this concrete representation of configurations as a five-dimensional vector whose last component is always $1$, we can thus represent $\op{S}_{\theta}$ as a matrix:

$$ \vect{p}_2 = \mat{S}_{\theta}\cdot\vect{p}_{1}, \qquad \begin{pmatrix} x_1\\y_1\\ \theta_1 + \theta \\ \phi_1\\ 1 \end{pmatrix} = \mat{S}_{\theta} \cdot \begin{pmatrix} x_1\\y_1\\ \theta_1 \\ \phi_1\\ 1 \end{pmatrix},\qquad \mat{S}_{\theta} = \begin{pmatrix} 1\\ & 1\\ && 1 &&\theta\\ &&& 1\\ &&&& 1 \end{pmatrix} $$
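A quick numerical sketch of this representation (the helper function S here is our own):

import numpy as np

def S(theta):
    """5x5 matrix representation of the steering operator S_theta."""
    mat = np.eye(5)
    mat[2, 4] = theta   # adds theta to the third (steering-angle) component
    return mat

p = np.array([1.0, 2.0, 0.1, 0.3, 1.0])   # (x, y, theta, phi, 1)
print(S(0.2) @ p)                          # steering angle: 0.1 -> 0.3

# Steering operations compose additively and invert as expected:
assert np.allclose(S(0.2) @ S(-0.2), np.eye(5))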

Motion

Motion of the car is obtained by applying the drive operator, but this is somewhat more difficult to describe. To specify this, we must work out the geometry of the car. In particular, the motion of the car when $\theta \neq 0$ will depend on the length of the car $L$, or more specifically, the distance between the back and front axles. Without loss of generality (w.l.o.g.), we can assume $L=1$m. (To discuss the motion of longer or shorter cars, we can simply change our units so that the numbers $x$ and $y$ specify the position in units of the length $L$).

To deduce the behaviour, use two vectors $\vect{f}$ and $\vect{b}$ which point to the center of front and back axles respectively. Cars generally have fixed length, so that $\norm{\vect{f} - \vect{b}} = L$ remains fixed. Now, if the car moves forward, then $\vect{b}$ moves in the direction of $\vect{f} - \vect{b}$ while $\vect{f}$ moves in the direction of the steering wheel.

A Better Representation

Polar coordinates are extremely useful for representing vectors in the $x$-$y$ plane, in particular through the representation as a complex number $z = x + \I y = re^{\I\phi}$. Here we suggest a more practical (though less intuitive) representation of the problem in terms of the complex number $z$ describing the position of the car, the phase $e^{\I\theta}$ for the orientation of the car, and the phase $e^{\I\varphi}$ for the orientation of the steering wheel. (Note that, relative to the previous section, the roles of the angles have changed: $\theta$ now denotes the orientation of the car and the steering angle is called $\varphi$.)

$$ \vect{p} = \begin{pmatrix} re^{\I\phi}\\ e^{\I\theta}\\ e^{\I\varphi} \end{pmatrix} $$

We can now work out how the car moves while driving from the following argument. Let $f=z$ be the center of the front axle and $b$ be the center of the back axle. These satisfy the following relationship where $L$ is the length of the axle and $\theta$ is the orientation of the car:

$$ f - b = L e^{\I\theta}. $$

While driving, the length of the car must not change: the front axle $f$ must move in the direction of the wheels $e^{\I(\theta + \varphi)}$, while the back axle $b$ must move towards the front axle along $e^{\I\theta}$. The infinitesimal motion of the car thus satisfies:

$$ \d{f} = e^{\I(\theta + \varphi)}\d{s}, \qquad \d{b} = ae^{\I\theta}\d{s}, \qquad \d{f}-\d{b} = e^{\I\theta}(e^{\I\varphi}-a)\d{s} = Le^{\I\theta}\I\d{\theta}. $$

We must adjust the coefficient $a$ so that the car does not change length, which is equivalent to the condition that $(e^{\I\varphi}-a)\d{s} = (\cos\varphi-a + \I\sin\varphi)\d{s} = \I L\d{\theta}$. Hence, after equating real and imaginary portions:

$$ a = \cos\varphi, \qquad \d{\theta} = \d{s}\sin\varphi/L. $$

The second condition tells us how fast the car rotates. We now have the complete infinitesimal forms for steering and driving:

$$ \d{\op{S}_{\alpha}(\vect{p})} = \begin{pmatrix} 0\\ 0\\ \I e^{\I\varphi}\\ \end{pmatrix} \d{\alpha}, \qquad \d{\op{D}_{s}(\vect{p})} = \begin{pmatrix} e^{\I(\theta + \varphi)}\\ \frac{\I\sin\varphi}{L}e^{\I\theta}\\ 0\\ \end{pmatrix} \d{s}. $$

In terms of the complex numbers where $z=re^{\I\phi}$ we can write these as derivatives:

$$ \op{s} = \pdiff{}{\varphi}, \qquad \op{d} = e^{\I(\theta + \varphi)}\pdiff{}{z} + \frac{\sin\varphi}{L}\pdiff{}{\theta}. $$

Now we can compute the commutator of these operators:

$$ [\op{s}, \op{d}]p = \op{s}\left( e^{\I(\theta+\varphi)}p_{,z} + \frac{\sin\varphi}{L}p_{,\theta} \right) - \op{d}(p_{,\varphi})\\ = \left( \I e^{\I(\theta+\varphi)}p_{,z} + e^{\I(\theta+\varphi)}p_{,z\varphi} + \frac{\cos\varphi}{L}p_{,\theta} + \frac{\sin\varphi}{L}p_{,\theta\varphi} \right) - \left(e^{\I(\theta+\varphi)}p_{,\varphi z} + \frac{\sin\varphi}{L}p_{,\varphi \theta}\right)\\ = \I e^{\I(\theta+\varphi)}p_{,z} + \frac{\cos\varphi}{L}p_{,\theta} = \left(\I e^{\I(\theta+\varphi)}\pdiff{}{z} + \frac{\cos\varphi}{L}\pdiff{}{\theta}\right)p. $$

This is a combination of a rotation and a translation which one can decompose into a pure rotation about some point (Exercise: find the point.)
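This computation is easy to check symbolically; here is a quick sketch with sympy (treating $z$ as an independent symbol):

import sympy as sp

z, theta, varphi, L = sp.symbols('z theta varphi L')
p = sp.Function('p')(z, theta, varphi)

s = lambda f: sp.diff(f, varphi)
d = lambda f: (sp.exp(sp.I * (theta + varphi)) * sp.diff(f, z)
               + sp.sin(varphi) / L * sp.diff(f, theta))

# The mixed partials cancel, leaving the rotation + translation above:
comm = sp.simplify(s(d(p)) - d(s(p)))
print(comm)
# I*exp(I*(theta + varphi))*Derivative(p, z) + cos(varphi)*Derivative(p, theta)/L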

We now repeat the same procedure using the point $b$ as the reference point.

$$ \d{\op{S}_{\alpha}(\vect{p})} = \begin{pmatrix} 0\\ 0\\ \I e^{\I\varphi}\\ \end{pmatrix} \d{\alpha}, \qquad \d{\op{D}_{s}(\vect{p})} = \begin{pmatrix} \cos\varphi e^{\I\theta}\\ \frac{\I\sin\varphi}{L}e^{\I\theta}\\ 0\\ \end{pmatrix} \d{s}. $$

$$ \op{s} = \pdiff{}{\varphi}, \qquad \op{d} = \cos\varphi e^{\I\theta}\pdiff{}{z} + \frac{\sin\varphi}{L}\pdiff{}{\theta}. $$

$$ [\op{s}, \op{d}]p = \op{s}\left( \cos\varphi e^{\I\theta}p_{,z} + \frac{\sin\varphi}{L}p_{,\theta} \right) - \op{d}(p_{,\varphi})\\ = \left( -\sin\varphi e^{\I\theta}p_{,z} + \cos\varphi e^{\I\theta}p_{,z\varphi} + \frac{\cos\varphi}{L}p_{,\theta} + \frac{\sin\varphi}{L}p_{,\theta\varphi} \right) - \left(\cos\varphi e^{\I\theta}p_{,\varphi z} + \frac{\sin\varphi}{L}p_{,\varphi \theta}\right)\\ = -\sin \varphi e^{\I\theta}p_{,z} + \frac{\cos\varphi}{L}p_{,\theta}\\ = \left(-\sin\varphi e^{\I\theta}\pdiff{}{z} + \frac{\cos\varphi}{L}\pdiff{}{\theta}\right)p. $$

In this case, we see that if we execute this commutator about $\varphi = 0$, then we indeed rotate the car about the center of the back axle without any translation.
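A small numerical sanity check of this claim (crude Euler integration, with our own helper functions): executing steer, drive, un-steer, un-drive about $\varphi = 0$ rotates the car ($\theta$ changes at order $\epsilon^2$) while the back axle barely moves (order $\epsilon^3$).

import numpy as np

L = 1.0

def drive(b, theta, varphi, s, n=1000):
    """Integrate db/ds = cos(varphi) e^{i theta}, dtheta/ds = sin(varphi)/L."""
    ds = s / n
    for _ in range(n):
        b += np.cos(varphi) * np.exp(1j * theta) * ds
        theta += np.sin(varphi) / L * ds
    return b, theta, varphi

def steer(b, theta, varphi, alpha):
    return b, theta, varphi + alpha

eps = 0.01
state = (0j, 0.0, 0.0)
state = steer(*state, eps)
state = drive(*state, eps)
state = steer(*state, -eps)
state = drive(*state, -eps)
b, theta, varphi = state
print(abs(b), theta)   # ~1e-6 vs ~1e-4 = eps^2/L: a rotation, (almost) no translation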

Unitary Fermi Gas

In [1]:
import mmf_setup;mmf_setup.nbinit()

This cell adds ['/Users/mforbes/work/mmfbb/forbes_group_website'] to your path, and contains some definitions for equations and some CSS for styling the notebook. If things look a bit strange, please try the following:

  • Choose "Trust Notebook" from the "File" menu.
  • Re-execute this cell.
  • Reload the notebook.

The Unitary Fermi Gas

Here we summarize some features of the unitary Fermi gas (UFG) in harmonic traps.

Read more…

Kamiak

Kamiak Cluster at WSU

Here we document our experience using the Kamiak HPC cluster at WSU.

Resources

Kamiak Specific

General

  • SLURM: Main documentation for the current job scheduler.
  • Lmod: Environment module system.
  • Conda: Package manager for python and other software.

Read more…

Prerequisites

This post describes the prerequisites that I will generally assume you have if you want to work with me. It also contains a list of references where you can learn these prerequisites. Please let me know if you find any additional resources particularly useful so I can add them for the benefit of others. This list is by definition incomplete - you should regard it as a minimum.

michael.forbes+blog@gmail.com

Read more…

Bogoliubov de Gennes

Here we demonstrate the use of the Bogoliubov-de Gennes (BdG) equations to look at the stability of fluctuations in quantum systems. The examples here are motivated by Gross-Pitaevskii (GP) and related equations, but aimed at explaining qualitatively the meaning and interpretation of the solutions.
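As a flavor (a minimal sketch, not the post's code): for a homogeneous condensate the BdG matrix at each wavevector is 2×2, and its eigenvalues reproduce the standard Bogoliubov dispersion $E_k = \sqrt{\epsilon_k(\epsilon_k + 2gn)}$.

import numpy as np

hbar = m = 1.0
g, n = 1.0, 1.0   # interaction strength and density (so mu = g*n)

for k in [0.1, 1.0, 5.0]:
    eps = (hbar * k) ** 2 / (2 * m)
    # BdG matrix coupling the (u, v) fluctuation amplitudes:
    M = np.array([[eps + g * n, g * n],
                  [-g * n, -(eps + g * n)]])
    E = np.sqrt(eps * (eps + 2 * g * n))
    assert np.allclose(sorted(np.linalg.eigvals(M).real), [-E, E])
    print(f"k={k}: E={E:.4f}")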

Read more…

Mac OS X

CoCalc Workflow (formerly SageMathCloud)

We describe various strategies for working with CoCalc including version control, collaboration, and using Dropbox.

Note: In some places, such as my aliases, I still use the acronym SMC, which refers to SageMathCloud – the previous name for CoCalc.

TL;DR

  1. Set up your local computer (once) as discussed below.
  2. Create a CoCalc project, add network access and any other upgrades, add your ssh key, and create an alias on your local computer for convenience.
  3. SSH to your CoCalc project and then do something like this:
ssh smcbec  # Or whatever you called your alias

cd ~        # Do this in the top level of your cocalc project.

cat >> ~/.hgrc <<EOF
[ui]
username = \$LC_HG_USERNAME
merge = emacs
paginate = never

[extensions]
# Builtin extensions:
rebase=
graphlog=
color=
record=
histedit=
shelve=
strip=
#extdiff =
#mq =
#purge =
#transplant =
#evolve =
#amend =

[color]
custom.rev = red
custom.author = blue
custom.date = green
custom.branches = red

[merge-tools]
emacs.args = -q --eval "(ediff-merge-with-ancestor \"\$local\" \"\$other\" \"\$base\" nil \"\$output\")"
EOF

cat >> ~/.gitconfig <<EOF
[push]
    default = simple
[alias]
    lg1 = log --graph --abbrev-commit --decorate --date=relative --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)' --all
    lg2 = log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n''          %C(white)%s%C(reset) %C(dim white)- %an%C(reset)' --all
    lg = !"git lg1"]
EOF

cat >> ~/.bash_aliases <<EOF
# Add some customizations for mercurial etc.
. mmf_setup

# Specified here since .gitconfig will not expand the variables
git config --global user.name "\${LC_GIT_USERNAME}"
git config --global user.email "\${LC_GIT_USEREMAIL}"
EOF

cat >> ~/.hgignore <<EOF
syntax: glob

*.sage-history
*.sage-chat
*.sage-jupyter
EOF

cat >> ~/.inputrc <<EOF
"\M-[A":       history-search-backward
"\e[A":        history-search-backward
"\M-[B":       history-search-forward
"\e[B":        history-search-forward
EOF

cat >> ~/.mrconfig <<EOF
# myrepos (mr) Config File; -*-Shell-script-*-
# dest = ~/.mrconfig     # Keep this as the 2nd line for mmf_init_setup

include = cat "${MMFHG}/mrconfig"

[DEFAULT]
hg_out = hg out
EOF

pip install --user mmf_setup
anaconda2019

# Should use conda or mamba, but this needs a new
# environment, so we just use pip for now.
pip install --user mmf_setup mmfutils

# Create a work environment and associate a kernel
mamba env create mforbes/work
mkdir -p ~/.local/share/jupyter/kernels/
cd ~/.local/share/jupyter/kernels/
cp -r /ext/anaconda-2019.03/share/jupyter/kernels/python3 work-py
cat > ~/.local/share/jupyter/kernels/work-py/kernel.json <<EOF
{
 "argv": [
  "/home/user/.conda/envs/work/bin/python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "work",
 "language": "python"
}
EOF

exit-anaconda

mkdir -p ~/repositories
git clone git://myrepos.branchable.com/ ~/repositories/myrepos
ln -s ~/repositories/myrepos/mr ~/.local/bin/

Setup

Knowledge: To follow these instructions you will need to understand how to work with the linux shell. If you are unfamiliar with the shell, please review the shell novice course. For a discussion of environment variables, see how to read and set environmental and shell variables.

Prior to creating a new project I do the following on my local computer:

  1. Set the following environment variables in one of my startup files. CoCalc automatically sources ~/.bash_aliases if it exists (this is specified in ~/.bashrc), so I use it:

    # ~/.bash_aliases
    ...
    # This hack allows one to forward the environmental variables using
    # ssh since LC_* variables are permitted by default.
    export LC_HG_USERNAME="Michael McNeil Forbes <michael.forbes+bitbucket@gmail.com>"
    export LC_GIT_USEREMAIL="michael.forbes+github@gmail.com"
    export LC_GIT_USERNAME="Michael McNeil Forbes"
    

    Then in my ~/.hgrc file I include the following:

    # ~/.hgrc
    [ui]
    username = $LC_HG_USERNAME
    

    This way, you specify your mercurial username in only one spot - the LC_HG_USERNAME environmental variable. (This is the DRY principle.)

    A similar configuration should be used if you want to use git, but variable expansion does not work with git. Instead, we need to set the user and email in the .bash_aliases file with something like:

    # ~/.bash_aliases
    git config --global user.name "$LC_GIT_USERNAME"
    git config --global user.email "$LC_GIT_USEREMAIL"
    

    The reason for using variable names starting with `LC_` is that these variables are generally allowed by the `sshd` server, so they can be sent when one uses ssh to connect to a project (see below).

  2. Find the host and username for your CoCalc project (look under the project Settings page under the >_ SSH into your project... section) and add these to my local ~/.ssh/config file. For example: CoCalc might say to connect to e397631635174e21abaa7c59fa227655@compute5-us.sagemath.com. This would mean I should add the following to my ~/.ssh/config file:

    # ~/.ssh/config
    Host smc*
      ForwardAgent yes
      SendEnv LC_HG_USERNAME
      SendEnv LC_GIT_USERNAME
      SendEnv LC_GIT_USEREMAIL
    
    Host smcbec
      HostName compute5-us.sagemath.com
      User e397631635174e21abaa7c59fa227655
    

    The SendEnv instruction will then apply to all smc* hosts and send the LC_HG_USERNAME environment variable. This allows us to refer to this in the ~/.hgrc file so that version control commands will log based on who issues them, which is useful because CoCalc does not provide user-level authentication (only project-level). Thus, if a user sends this, then mercurial can use the appropriate username. (A similar setup with git should be possible.) See issue #370 for more information.

  1. Once the project has been created, I add the contents of my ~/.ssh/id_rsa.pub to CoCalc using the web interface for SSH Keys. This allows me to login to my projects. (On other systems, this would be the equivalent of adding it to ~/.ssh/authorized_keys.)
  2. Add any resources for the project. (For example, network access simplifies installing stuff below, and using a fixed host will prevent the compute node from changing so that the alias setup in the next step will keep working. However, you must pay for these upgrades.)
  3. Create a ~/.hgrc file like the following:

    [ui]
    username = $LC_HG_USERNAME

    [extensions]
    #####################
    # Builtin extensions:
    rebase=
    graphlog=
    color=
    record=
    histedit=
    shelve=
    strip=

    [color]
    custom.rev = red
    custom.author = blue
    custom.date = green
    custom.branches = red
    

    This one enables some extensions I find useful and specifies the username using the $LC_HG_USERNAME environmental variable sent by ssh in the previous step.

  4. Create a ~/.gitconfig file like the following:

    [user]
        name =
        email =

    [push]
        default = simple

    [alias]
        lg1 = log --graph --abbrev-commit --decorate --date=relative --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)' --all
        lg2 = log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n''          %C(white)%s%C(reset) %C(dim white)- %an%C(reset)' --all
        lg = !"git lg1"
    

    This one provides a useful git lg command and specifies the username using the $LC_GIT_USERNAME etc. environment variables sent by ssh in the previous step.

  5. Install the mmf_setup package (I do this also in the anaconda3 environment):

    pip install mmf_setup
     anaconda3
     pip3 install mmf_setup
     exit-anaconda
    

    Note: this requires you to have enabled network access in step 2.

  6. (optional) Enable this by adding the following lines to your ~/.bash_aliases file on CoCalc:

    cat >> ~/.bash_aliases <<EOF
     # Add some customizations for mercurial etc.
     . mmf_setup
     git config --global user.name "\${LC_GIT_USERNAME}"
     git config --global user.email "\${LC_GIT_USEREMAIL}"
     EOF
    

    This will set your $HGRCPATH path so that you can use some of the tools I provide in my mmf_setup package, for example, the hg ccommit command which runs nbstripout to remove output from Jupyter notebooks before committing them.

  7. (optional) I find the following settings very useful for tab completion etc., so I also add the following ~/.inputrc file on CoCalc: (The default configuration has LC_ALL=C so I do not need anything else, but see the comment below.)

    #~/.inputrc
    
     # This file is used by bash to define the key behaviours.  The current
     # version allows the up and down arrows to search for history items
     # with a common prefix.
     #
     # Note: For these to be properly interpreted, you need to make sure your locale
     # is properly set in your environment with something like:
     # export LC_ALL=C
    
     #
     # Arrow keys in keypad mode
     #"\M-OD":        backward-char
     #"\M-OC":        forward-char
     #"\M-OA":        previous-history
     #"\M-OB":        next-history
     #
     # Arrow keys in ANSI mode
     #
     #"\M-[D":        backward-char
     #"\M-[C":        forward-char
     "\M-[A":        history-search-backward
     "\M-[B":        history-search-forward
     #
     # Arrow keys in 8 bit keypad mode
     #
     #"\M-\C-OD":       backward-char
     #"\M-\C-OC":       forward-char
     #"\M-\C-OA":       previous-history
     #"\M-\C-OB":       next-history
     #
     # Arrow keys in 8 bit ANSI mode
     #
     #"\M-\C-[D":       backward-char
     #"\M-\C-[C":       forward-char
     #"\M-\C-[A":       previous-history
     #"\M-\C-[B":       next-history
    
  8. (optional) Update and configure pip to install packages as a user:

    pip install --upgrade pip
     hash -r   # Needed to use new pip before logging in again
    
     mkdir -p ~/.config/pip/
     cat >> ~/.config/pip/pip.conf <<EOF
     [install]
     user = true
     find-links = https://bitbucket.org/mforbes/mypi/
     EOF
    

    The configuration uses my personal index which allows me to point to various revisions of my software.

Custom Environments

TL;DR:

Create an appropriate environment.yml file:

# environment.yml
name: _my_environment
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3
  - matplotlib
  - scipy
  - sympy

  - ipykernel
  - ipython
  #- notebook
  #- numexpr

  - pytest

  # Profiling
  - line_profiler
  #- psutil
  - memory_profiler

  - pip:
    - mmf_setup
    - hg+ssh://hg@bitbucket.org/mforbes/mmfutils-fork@0.4.10

We would like to move towards a workflow with custom conda environments. The idea is as follows:

ssh smctov
anaconda2019
conda create -n work3 --clone base   # Clone the base environment

Notes:

Once you have a custom environment, you can locate it and make a custom Jupyter kernel for it. First locate the environment:

$ conda env list
# conda environments:
#
base                  *  /ext/anaconda5-py3
xeus                     /ext/anaconda5-py3/envs/xeus
_gpe                     /home/user/.conda/envs/_gpe

Now copy another specification, then edit the kernel.json file. Here is what I ended up with:

mkdir -p ~/.local/share/jupyter/kernels/
cd ~/.local/share/jupyter/kernels/
cp -r /ext/anaconda2020.02/share/jupyter/kernels/python3 work-py
vi ~/.local/share/jupyter/kernels/work-py/kernel.json

The name of the directory here work-py matches the name of the kernel on my machine where I use the ipykernel package. This allows me to use the same notebooks going back and forth without changing the kernel.

# kernel.json
{
 "argv": [
  "/home/user/.conda/envs/work/bin/python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "work",
 "language": "python"
}

MayaVi

MayaVi is a nice rendering engine for analyzing 3D data-structures, but poses some problems for use on CoCalc. Here we describe these and how to get it working.

  1. Create an appropriate conda environment and associated kernel as described above. For example:

    1. Create an environment.yml file:

      # environment.mayavi3.yml
      name: mayavi3
      channels:
        - defaults
        - conda-forge
      dependencies:
        - python=3
        - ipykernel
        - mayavi
        - xvfbwrapper
      
        # jupyter is only needed in the first environment to run 
        #    jupyter nbextension install --py mayavi --user
        # Once this is run from an environment with *both* jupyter
        # and mayavi, it is not needed in future environments.
        - jupyter

      We need jupyter here so that we can install the appropriate CSS etc. to allow for rendering.

    2. Activate anaconda and create the mayavi3 environment:

      anaconda2019
      conda env create -f environment.mayavi3.yml
      
    3. Create the appropriate kernel:

      Find the location of current kernels:

      # This path is a kludge.  Check it!  The awk command strips spaces
      # https://unix.stackexchange.com/a/205854
      base_kernel_dir=$(jupyter --paths | grep ext | grep share | awk '{$1=$1;print}')
      echo "'$base_kernel_dir'"
      

      At the time of running, this is: /ext/anaconda-2019.03/share/jupyter

      mkdir -p ~/.local/share/jupyter/kernels/
      cp -r "$base_kernel_dir"/kernels/python3 \
            ~/.local/share/jupyter/kernels/conda-env-mayavi3-py
      vi ~/.local/share/jupyter/kernels/conda-env-mayavi3-py/kernel.json
      
  2. Activate the environment and install the javascript required to render the output:

    anaconda2019
    conda activate mayavi3
    jupyter nbextension install --py mayavi --user
    jupyter nbextension enable mayavi --user --py
    

    This places the javascript etc. in ~/.local/share/jupyter/nbextensions/mayavi and adds an entry in ~/.jupyter/nbconfig/notebook.json:

    {
      "load_extensions": {
        "mayavi/x3d/x3dom": true
      }
    }
    
  3. Start a new notebook with your kernel and run it in the classic notebook server ("Switch to classic notebook..." under "File").

  4. Start a virtual frame-buffer and then use MayaVi with something like the following in your notebook:

    from xvfbwrapper import Xvfb
    with Xvfb() as xvfb:
        from mayavi import mlab
        mlab.init_notebook()
        s = mlab.test_plot3d()
        display(s)
    

    Alternatively, you can skip the context manager and do something like:

    from xvfbwrapper import Xvfb
    vdisplay = Xvfb()
    vdisplay.start()
    from mayavi import mlab
    mlab.init_notebook()
    s = mlab.test_plot3d()
    display(s)
    vdisplay.stop()   # Calling this becomes a bit more onerous, but might not be critical
    

    See xvfbwrapper for more details.

System Software

One can definitely build system software from source, linking it into ~/.local/bin/ etc. I am not sure if there is a way of using apt-get or other package managers yet.

Synchronization

The automatic synchronization mechanism of CoCalc has some issues. The problem (issue #96) is that when a VCS update changes the files, it can change the modification date in a way that triggers the autosave system to revert to a previous version. The symptom is that you initially load the notebook that you want, but within seconds it reverts to an old (or even blank) version.

Thus, it is somewhat dangerous to perform a VCS update on CoCalc: you risk losing local work. Note: none of the work is lost - you can navigate to the project page and look for the "Backups" button on the file browser. This will take you to read-only copies of your work which you can use to restore anything lost this way.

Presently, the only safe solution to update files from outside of the UI is to update them in a new directory.

MathJaX

MathJaX is rather slow, so in 2018 CoCalc enabled KaTeX. Unfortunately, KaTeX is not as feature-rich as MathJaX. If you need full MathJaX functionality, then you can revert to MathJaX in your account settings.

RClone for Google Drive etc.

The program rclone provides a command-line interface to several cloud-storage services like Dropbox and Google Drive. Here are some notes about using it:

  1. Make sure internet access is enabled for your project.
  2. The rclone software is already installed on CoCalc; if you need a newer version, download the binary and install it as `~/.local/bin/rclone`.
  3. Add a remote by running rclone config. Some notes:

    • For the "Scope" I chose "Full access all files". More secure would be "Access to files created by rclone only" but this will not work if you add files from another device.
    • If you want to link a specific folder, you can copy the "ID of the root folder" from the last part of the URL when you open the folder in Google Drive. It looks something like "1RAtwfvaJUk4vULw1z1t1qKWJCsJIRqCZ".
    • When asked for "Auto config" choose no - this is a "headless" configuration.

    I chose the name gd for the Google Drive remote. Follow the provided link and link your account.

  4. You can now explore with:

    rclone lsd gd:   # Show remote directories
    rclone copy gd:PaperQuantumTurbulence .   # Copy a folder
    rclone check gd:PaperQuantumTurbulence PaperQuantumTurbulence 
    rclone sync gd:PaperQuantumTurbulence PaperQuantumTurbulence  # Like rsync -d

Notebook Extensions

One can use Jupyter notebook extensions (nbextensions) only with the Classic Notebook interface. As per issue 985, extensions will not be supported with the Modern Notebook interface, but the CoCalc developers aim to independently implement useful extensions, so file an issue if there is something you want. In this section, we discuss how to enable extensions with the Classic Interface.

  1. Open your notebook.
  2. At the bottom of the File menu choose File/Switch to classical notebook.... (As per issue 1537, this will not work with Firefox.)
  3. At the bottom of the Edit menu choose Edit/nbextensions config. This will allow you to enable various extensions.

(Old Procedure: Not Needed Anymore)

  1. Enable internet access.
  2. Install the code:

    pip install --user https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tarball/master
    

    Note: this will also install a copy of jupyter which is not ideal and is not the one that is run by default, but it will allow you to configure things.

  3. Symbolically link this to your user directory:

    ln -s ~/.local/lib/python2.7/site-packages/jupyter_contrib_nbextensions/nbextensions .jupyter/
    
  4. Install the extensions:

    jupyter contrib nbextension install --user
    

    This step adds the configuration files.

  5. Restart the server.
  6. Reload your notebooks.

You should now see a new menu item: Edit/nbextensions config. From this you can enable various extensions. You will need to refresh/reload each notebook when you make changes.

Note: This is not a proper installation, so some features may be broken. The Table of Contents (2) feature works, however, which is one of my main uses.

Raw File Access

If you want to directly access files such as HTML files without the CoCalc interface, you can. This is described here.

Configuration Files

Here are the final contents of the configuration files I have on my computer and on CoCalc. These may get out of date. For up-to-date versions, please see the configurations/complete_systems/cocalc folder in my configurations project.

Your Computer

~/.bashrc

# ~/.bashrc

export HG_USERNAME="Michael McNeil Forbes <michael.forbes+bitbucket@gmail.com>"
export GIT_USEREMAIL="michael.forbes+github@gmail.com"
export GIT_USERNAME="Michael McNeil Forbes"
export LC_HG_USERNAME="${HG_USERNAME}"
export LC_GIT_USERNAME="${GIT_USERNAME}"
export LC_GIT_USEREMAIL="${GIT_USEREMAIL}"

~/.ssh/config

# ~/.ssh/config:
Host smc*
  HostName ssh.cocalc.com
  ForwardAgent yes
  SendEnv LC_HG_USERNAME
  SendEnv LC_GIT_USERNAME
  SendEnv LC_GIT_USEREMAIL

Host smcbec...      # Some convenient name
    User 01a3...    # Use the code listed on CoCalc

CoCalc Project

~/.bash_alias

# Bash Alias File; -*-Shell-script-*-
# dest = ~/.bash_alias     # Keep this as the 2nd line for mmf_init_setup
#
# On CoCalc, this file is automatically sourced, so it is where you
# should keep your customizations.
#
# LC_* variables:
#
# Since each user logs in with the same user-id (specific to the
# project), I use the following mechanism to keep track of who is
# logged in for the purposes of using version control like hg and git.
#
# You should define the following variables on your home machine and then tell
# ssh to forward these:
#
# ~/.bashrc:
#
#     export HG_USERNAME="Michael McNeil Forbes <michael.forbes+bitbucket@gmail.com>"
#     export GIT_USEREMAIL="michael.forbes+github@gmail.com"
#     export GIT_USERNAME="Michael McNeil Forbes"
#     export LC_HG_USERNAME="${HG_USERNAME}"
#     export LC_GIT_USERNAME="${GIT_USERNAME}"
#     export LC_GIT_USEREMAIL="${GIT_USEREMAIL}"
#
# ~/.ssh/config:
#
#     Host smc*
#       HostName ssh.cocalc.com
#       ForwardAgent yes
#       SendEnv LC_HG_USERNAME
#       SendEnv LC_GIT_USERNAME
#       SendEnv LC_GIT_USEREMAIL
#     Host smc...      # Some convenient name
#       User 01a3...   # Use the code listed on CoCalc
#
# Then you can `ssh smc...` and your username will be forwarded.

# This content inserted by mmf_setup
# Add my mercurial commands like hg ccom for removing output from .ipynb files
. mmf_setup


# Specified here since .gitconfig will not expand the variables
git config --global user.name "\${LC_GIT_USERNAME}"
git config --global user.email "\${LC_GIT_USEREMAIL}"

export INPUTRC=~/.inputrc
export SCREENDIR=~/.screen

# I structure my projects with a top level repositories directory
# where I include custom repos.  The following is installed by:
#
# mkdir ~/repositories
# hg clone ssh://hg@bitbucket.org/mforbes/mmfhg ~/repositories/mmfhg
export MMFHG=~/repositories/mmfhg
export HGRCPATH="${HGRCPATH}":"${MMFHG}"/hgrc

export EDITOR=vi

# Finding stuff
function finda {
    find . \(                                                                    \
     -name ".hg" -o -name ".ipynb_checkpoints" -o -name "*.sage-*" \) -prune \
     -o -type f -print0 | xargs -0 grep -H "${*:1}"
}

function findf {
    find . \(                                                                    \
     -name ".hg" -o -name ".ipynb_checkpoints" -o -name "*.sage-*" \) -prune \
     -o -type f -name "*.$1" -print0 | xargs -0 grep -H "${*:2}"
}

# I used to use aliases, but they cannot easily be overridden by
# personalized customizations.
function findpy { findf py "${*}"; }
function findipy { findf ipynb "${*}"; }
function findjs { findf js "${*}"; }
function findcss { findf css "${*}"; }

~/.inputrc

# Bash Input Init File; -*-Shell-script-*-
# dest = ~/.inputrc     # Keep this as the 2nd line for mmf_init_setup

# This file is used by bash to define the key behaviours.  The current
# version allows the up and down arrows to search for history items
# with a common prefix.
#
# Note: For these to be properly interpreted, you need to make sure your locale
# is properly set in your environment with something like:
# export LC_ALL=C

"\M-[A":       history-search-backward
"\M-[B":       history-search-forward
"\e[A":        history-search-backward
"\e[B":        history-search-forward

~/.hgrc

# Mercurial (hg) Init File; -*-Shell-script-*-
# dest = ~/.hgrc     # Keep this as the 2nd line for mmf_init_setup

[ui]
username = $LC_HG_USERNAME
merge = emacs
paginate = never

[extensions]
# Builtin extensions:
rebase=
graphlog=
color=
record=
histedit=
strip=
#extdiff =
#mq =
#purge =
#transplant =
#evolve =
#amend =

[color]
custom.rev = red
custom.author = blue
custom.date = green
custom.branches = red

[merge-tools]
emacs.args = -q --eval "(ediff-merge-with-ancestor \""$local"\" \""$other"\" \""$base"\" nil \""$output"\")"

~/.hgignore

# Mercurial (hg) Init File; -*-Shell-script-*-
# dest = ~/.hgignore     # Keep this as the 2nd line for mmf_init_setup

syntax: glob

\.ipynb_checkpoints
*\.sage-jupyter2

~/.gitconfig

# Git Config File; -*-Shell-script-*-
# dest = ~/.gitconfig    # Keep this as the 2nd line for mmf_init_setup

[push]
    default = simple
[alias]
    lg1 = log --graph --abbrev-commit --decorate --date=relative --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)' --all
    lg2 = log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold cyan)%aD%C(reset) %C(bold green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n''          %C(white)%s%C(reset) %C(dim white)- %an%C(reset)' --all
    lg = !"git lg1"]

~/.mrconfig

Note: Until symlinks work again, I can't really use myrepos.

# myrepos (mr) Config File; -*-Shell-script-*-
# dest = ~/.mrconfig     # Keep this as the 2nd line for mmf_init_setup
#
# Requires the myrepos perl script to be installed which you can do with the
# following commands:
#
# mkdir -p ~/repositories
# git clone git://myrepos.branchable.com/ ~/repositories/myrepos
# pushd ~/repositories/myrepos; git checkout 52e2de0bdeb8b892c8b83fcad54543f874d4e5b8; popd
# ln -s ~/repositories/myrepos/mr ~/.local/bin/
#
# Also requires the mmfhg package which can be enabled by installing
# mmf_setup and running . mmf_setup from your .bash_aliases file.

include = cat "${MMFHG}/mrconfig"

[DEFAULT]
hg_out = hg out

Old Information (Out of Date)

The following information is recorded for historical purposes. It no longer applies to CoCalc.

Dropbox no longer works!

Dropbox dropped Linux support for all file systems except ext4, which is not an option for CoCalc.

You can use Dropbox on CoCalc:

dropbox start -i

The first time you do this, it will download some files and you will need to provide a username etc. Note: as pointed out in the link, by default Dropbox wants to share everything. I am not sure of the best strategy for this, but I chose to create a separate Dropbox account for the particular project I am using, then just added the appropriate files to that account.

One nice thing about Dropbox is that it works well with symbolic links, so you can just symlink any files you want into the ~/Dropbox folder and everything should work fine. But hold off for a moment: first exclude the appropriate folders, then make the symlinks.

If you want to run Dropbox every time you log in, then add the following to your ~/.bash_aliases on CoCalc:

cat >> ~/.bash_aliases <<EOF

# Start Dropbox
dropbox start
EOF

This did not link with my dropbox account. I had to manually kill the dropbox process, then run the following:

.dropbox-dist/dropboxd

Dropbox and Version Control

Although tempting, one should not use Dropbox to share VCS files since corruption might occur. Instead, you can use Dropbox to sync the working directory (for collaborators) but use selective sync to ignore the sensitive VCS files. Here is one way to do this:

cd ~/Dropbox
find -L . -name ".hg" -exec dropbox exclude add {} \;
find -L . -name ".git" -exec dropbox exclude add {} \;

Note: This will remove the .hg directories! (The behaviour of excluding a file with selective sync is to remove it from the local Dropbox.) Thus, my recommendation is the following:

  1. First copy the files to the Dropbox folder using cp -r.
  2. Then run the previous find commands to exclude the .hg directories.
  3. Check this with dropbox exclude list.
  4. Turn off Dropbox: dropbox stop.
  5. Now replace the folders with appropriate symlinks.
  6. Finally, start Dropbox again: dropbox start.

Here is a summary by example, assuming that you have a version-controlled project called ~/paper that you want mirrored in your Dropbox folder:

dropbox start
cp -r ~/paper ~/Dropbox/
pushd Dropbox; find -L . -name ".hg" -exec dropbox exclude add {} \; ; popd
dropbox exclude list
dropbox stop
rm -r ~/Dropbox/paper
ln -s ~/paper ~/Dropbox/paper
dropbox start

Remember to do the same on your local computer!

Dropbox Issues

Occasionally there will be issues with Dropbox. One of these arises when the host for the project changes (this happens, for example, when you add or remove member hosting for a project). To deal with this you might have to unlink and relink the computer to Dropbox:

  • Make sure your project has "Internet access" in Settings.
  • dropbox stop on the CoCalc server.
  • Sign into Dropbox with the account you have linked to the project. (Hint: if you can't remember the name, look in your main Dropbox account where you should have shared this with the CoCalc project. It can be useful to use Incognito mode or a different browser to sign into an auxiliary account.)
  • Go to Settings (top right), then the Security tab.
  • In the Devices section, unlink the device.
  • Restart dropbox with ~/.dropbox-dist/dropboxd. This should give you a link to paste into the browser you have signed into and it will relink.

Details about configuring Nikola

This notebook contains details about various commands and techniques that would obscure the main point if included in the other documents.

Debugging Nikola

Here are some tips for debugging Nikola when things go wrong.

  • Run with pdb support:

    NIKOLA_DEBUG=1 nikola build -v 2 --pdb
    

    This does not halt on some errors, though. You can force Nikola to halt on warnings with

    nikola build --strict
    

    This will give tracebacks about the warnings that might be hidden.

Clean Environment

To get as clean an environment as possible, I did the following:

$ conda create -n blog3 python=3
$ conda activate blog3
$ pip install Nikola

# Trying to run nikola build on my site gave errors requiring
# the following
# ModuleNotFoundError: No module named 'mmf_setup'
# ERROR: Nikola: In order to use this theme, you must install the "jinja2" Python package.
# ERROR: Nikola: In order to build this site (compile ipynb), you must install the "notebook>=4.0.0" Python package.

$ conda install mmf_setup jinja2 notebook

# Now we clean the environment, installing as much as possible with conda
$ conda install pipdeptree
$ pipdeptree | perl -nle'print if m{^\w+}'

Python 2 vs Python 3

As of 2020, Python 2 support has stopped, and all major libraries now support Python 3. This includes Mercurial as of version 5.2, our main reason for holding back. Thus, at this point, I strongly recommend starting from Python 3. The remaining material here is for historical reasons.

One issue is that mercurial requires Python 2 and likely will not support Python 3 for some time. This means that we will have to get users to install Mercurial separately as part of their OS.

Since I often want control of Mercurial, I still would like to install it under conda. Here is how I do it:

  1. Install Miniconda (see below) with python=2 into /data/apps/anaconda/bin.
  2. conda install mercurial. This gives me a working version.
  3. conda create -n talent2015 python=3 anaconda. This is my python 3 working environment.
  4. Make sure that /data/apps/anaconda/bin appears at both the start and end of my path. For example:

    export PATH="/data/apps/anaconda/bin:$PATH:/data/apps/anaconda/bin/./"
    

This little trick ensures that when I activate a new environment, the default bin directory remains at the end of my PATH so that hg can fall back to it and still work:

$ . activate talent2015
discarding /data/apps/anaconda/bin from PATH
prepending /data/apps/anaconda/envs/talent2015/bin to PATH
(talent2015) $ echo $PATH
/data/apps/anaconda/envs/talent2015/bin:...:/data/apps/anaconda/bin/./

Conda

  • conda install seems to also do a conda update, so use this in examples if you are not certain the user has installed the package (in which case conda update will fail).

Clean Environment (for Developers)

If you are developing code, then it is important to have a clean environment so you don't accidentally depend on packages that you forget to tell the students to install. This can be done as follows. (Not everyone needs to add this complication – we just need to make sure that a few people test all the code in a clean environment.) Here is how I am doing this now (MMF):

  1. First I installed a clean version of Miniconda based on Python 2 so that I can use mercurial. This is the minimal package manager without any additional dependencies. The reason is that any environments you create will have access to the packages installed in the root (default) and I would like to be able to test code without the full anaconda distribution. Since we require the full Anaconda distribution here, it is fine to install it instead if you don't care about testing your code in clean environment.

    I install mine in a global location /data/apps/anaconda. (This path will appear in examples below. Modify to match your installation choice. I believe the default is your home directory ~/miniconda or ~/anaconda.) To use this, you need to add the appropriate directory to your path. The installer will offer to do this for you but if you customize your startup files, you might need to intervene a little. Make sure that something like this appears in your ~/.bash_profile or ~/.bashrc file:

    export PATH="/data/apps/anaconda/bin:$PATH:/data/apps/anaconda/bin/./"
    export DYLD_LIBRARY_PATH="/data/apps/anaconda/anaconda/lib:$DYLD_LIBRARY_PATH"
    

    (The library path is useful if you install shared libraries like the FFTW and want to manage them with conda. Someone with Linux expertise, please add the appropriate environmental exports for the linux platform.)

  2. Next I create various environments for working with different versions of python, etc. For this project I do:

    conda create -n py2 python=2
    conda create -n py3 python=3
    conda create -n anaconda2 anaconda
    conda create -n anaconda3 anaconda python=3
    conda create -n talent2015 python=3 anaconda
    

    This creates a new full anaconda distribution in the environment named talent2015. To use this you activate it with one of (requires bash or zsh):

    source activate talent2015
    # OR
    . activate talent2015         # The "." is a shortcut for "source"
    

    All this really does is modify your PATH to point to /data/apps/anaconda/envs/talent2015/bin. When you are finished, you can deactivate this with:

    . deactivate
    

New Posts

There appears to be no easy way to customize new posts. In particular, as of version 7.7.6 I tried to modify the template to insert <!-- TEASER_END --> after the content. I traced this to line 141 in nikola/plugins/compile/ipynb.py which inserts the content. The actual content is a translated version of "Write your post here." and everything is inserted programmatically without any apparent opportunity for customization.

In order to customize this, I use a custom script for starting new posts:

# Creates a new blog post
function np ()
{
    cd /Users/mforbes/current/website/posts/private_blog/
    . activate blog
    nikola new_post -i  /Users/mforbes/current/website/new_post_content.md
    . activate work
}

Then I place the following content in the file /Users/mforbes/current/website/new_post_content.md:

![Figure Description](/images/Figure.png)
# Title
Write your post here!
<!-- TEASER_END -->

Note: In order for this to work, I needed to make sure that the markdown compiler was not disabled. To do this, I made sure that in my conf.py file I had at least one reference to markdown (I did this in my PAGES list):

...
PAGES = (
    ("pages/*.md", "", "story.tmpl"),
    ("pages/*.rst", "", "story.tmpl"),
    ("pages/*.txt", "", "story.tmpl"),
    ("pages/*.html", "", "story.tmpl"),
)

Installing Nikola

Constrained Optimization

Here we briefly explore some of the SciPy tools for constrained optimization.
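
As a taste of these tools, here is a minimal sketch (my own toy example, not from the post) using scipy.optimize.minimize with an inequality constraint:

from scipy.optimize import minimize

# Toy problem: minimize (x-1)**2 + (y-2)**2 subject to x + y <= 2.
def f(z):
    x, y = z
    return (x - 1)**2 + (y - 2)**2

# SLSQP expects inequality constraints in the form g(z) >= 0.
constraints = [dict(type='ineq', fun=lambda z: 2 - z[0] - z[1])]
res = minimize(f, x0=[0.0, 0.0], method='SLSQP', constraints=constraints)
print(res.x)   # -> approximately [0.5, 1.5]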

Read more…

Debugging with Functions

Debugging with Functions

IPython notebooks are great for interactive use. You can perform a bunch of calculations in the global scope, look at the results etc. If some code in a loop crashes, you can see the current values of all the variables etc.

However, working in the global scope is generally bad practice, especially in a notebook where you might be executing cells out of order. For example:

  • A variable you think you have defined might have actually been created later on, so if you run the notebook from scratch, it will break.
  • Variables might be overwritten later, and so rerunning code may have unexpected behaviour that depends sensitively on how you toyed with your notebook.

For these and many other reasons, it is often better to do certain calculations in a function. (This function can then be passed to other tools like profilers, timers, parallel computations, animation libraries etc.). However, now if something goes wrong, you need to fire up a debugger which does not work so well in a notebook.

My previous solution was to do something like the following once I started the debugger:

ipdb> !import sys;sys._l = locals()
ipdb> q

Then I can get out of the debugger and put all of the local variables in my environment:

import sys
locals().update(sys._l)

Decorating Functions for Debugging

Note: This is now implemented through the decorator mmfutils.debugging.debug with the only caveat that you must pass locals() explicitly.

I recently came across this question on SO: Python: Is there a way to get a local function variable from within a decorator that wraps it?. Here we explore this a bit.

In [7]:
import sys

class persistent_locals(object):
    """Decorator that stores the functions local variables 
    in an attribute `locals`.
    """
    def __init__(self, func):
        self._locals = {}
        self.func = func

    def __call__(self, *args, **kwargs):
        def tracer(frame, event, arg):
            if event=='return':
                self._locals = frame.f_locals.copy()

        # tracer is activated on next call, return or exception
        sys.setprofile(tracer)
        try:
            # trace the function call
            res = self.func(*args, **kwargs)
        finally:
            # disable tracer and replace with old one
            sys.setprofile(None)
        return res

    def clear_locals(self):
        self._locals = {}

    @property
    def locals(self):
        return self._locals

This seems to work well:

In [6]:
@persistent_locals
def func():
    local1 = 1
    local2 = 2

func()
print(func.locals)
{'local1': 1, 'local2': 2}

It also works if an exception is raised:

In [11]:
@persistent_locals
def func():
    local1 = 1
    raise Exception
    local2 = 2

try:
    func()
except:
    pass
print(func.locals)
{'local1': 1}

Now we can add this to a debug decorator that automatically adds the variables to the "global" scope.

In [15]:
def debug(locals):
    """Decorator to wrap a function and dump its local scope.
    
    Arguments
    ---------
    locals : dict
       Function's local variables will be updated in this dict.
       Use locals() if desired.
    """
    def decorator(f):
        func = persistent_locals(f)
        def wrapper(*v, **kw):
            try:
                res = func(*v, **kw)
            finally:
                locals.update(func.locals)
            return res
        return wrapper
    return decorator
In [20]:
def f():
    l1 = 1
    l2 = 2
    x = 1/0
    l3 = 3
    
f()
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-20-0923ea4950bf> in <module>()
      6     l3 = 3
      7 
----> 8 f()

<ipython-input-20-0923ea4950bf> in f()
      3     l1 = 1
      4     l2 = 2
----> 5     x = 1/0
      6     l3 = 3
      7 

ZeroDivisionError: integer division or modulo by zero
In [22]:
env = {}
debug(env)(f)()
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-22-4c12b3bd07a6> in <module>()
      1 env = {}
----> 2 debug(env)(f)()

<ipython-input-15-7e036ba47dfa> in wrapper(*v, **kw)
     12         def wrapper(*v, **kw):
     13             try:
---> 14                 res = func(*v, **kw)
     15             finally:
     16                 locals.update(func.locals)

<ipython-input-7-887d4c0d0705> in __call__(self, *args, **kwargs)
     18         try:
     19             # trace the function call
---> 20             res = self.func(*args, **kwargs)
     21         finally:
     22             # disable tracer and replace with old one

<ipython-input-20-0923ea4950bf> in f()
      3     l1 = 1
      4     l2 = 2
----> 5     x = 1/0
      6     l3 = 3
      7 

ZeroDivisionError: integer division or modulo by zero
In [23]:
env
Out[23]:
{'l1': 1, 'l2': 2}

This works, but I would rather not see all of the wrapper details in the exception.

In [40]:
def debug(locals=None):
    """Decorator to wrap a function and dump its local scope.
    
    Arguments
    ---------
    locals : dict
       Function's local variables will be updated in this dict.
       Use locals() if desired.
    """
    if locals is None:
        locals = globals()    
    def decorator(f):
        func = persistent_locals(f)
        def wrapper(*v, **kw):
            try:
                res = func(*v, **kw)
            except Exception, e:
                # Remove two levels of the traceback so we don't see the
                # decorator junk
                exception_type, exception, traceback = sys.exc_info()
                raise exception_type, exception, traceback.tb_next.tb_next
            finally:
                locals.update(func.locals)
            return res
        return wrapper
    return decorator
In [31]:
debug(env)(f)()
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-31-7ce7577b18e9> in <module>()
----> 1 debug(env)(f)()

<ipython-input-20-0923ea4950bf> in f()
      3     l1 = 1
      4     l2 = 2
----> 5     x = 1/0
      6     l3 = 3
      7 

ZeroDivisionError: integer division or modulo by zero

If we do not specify the environment, the function's local variables appear in the global scope:

In [49]:
assert 'l1' not in locals()
try:
    debug()(f)()
except:
    pass
print(l1, l2)
del l1, l2
(1, 2)

Now the final touch would be to not have to call the decorator if we are not passing in the environment.

In [105]:
def debug(*v, **kw):
    """Decorator to wrap a function and dump its local scope.
    
    Arguments
    ---------
    locals (or env): dict
       Function's local variables will be updated in this dict.
       Use locals() if desired.
    """
    func = None
    env = kw.get('locals', kw.get('env', None))

    if len(v) == 0:
        pass
    elif len(v) == 1:
        if isinstance(v[0], dict):
            env = v[0]
        else:
            func = v[0]
    elif len(v) == 2:
        func, env = v
    else:
        raise ValueError("Must pass in either function or locals or both")
    
    if env is None:
        env = globals()

    def decorator(f):
        func = persistent_locals(f)
        def wrapper(*v, **kw):
            try:
                res = func(*v, **kw)
            except Exception, e:
                # Remove two levels of the traceback so we don't see the
                # decorator junk
                exception_type, exception, traceback = sys.exc_info()
                raise exception_type, exception, traceback.tb_next.tb_next
            finally:
                env.update(func.locals)
            return res
        return wrapper

    if func is None:
        return decorator
    else:
        return decorator(func)

Here are the four calling styles:

In [106]:
def f():
    l1 = 1
    l2 = 2
    return l1 + l2
    
env = {}
res = debug(f, env)(); print env, res
res = debug(f, locals=env)(); print env, res
res = debug(env)(f)(); print env, res
res = debug(locals=env)(f)(); print env, res
{'l2': 2, 'l1': 1} 3
{'l2': 2, 'l1': 1} 3
{'l2': 2, 'l1': 1} 3
{'l2': 2, 'l1': 1} 3

If you want to just update the global dict, there is one calling style:

In [107]:
assert 'l1' not in locals()
res = debug(f)()
print l1, l2, res
del l1, l2
1 2 3

And now as a decorator:

In [109]:
@debug
def f():
    l1 = 1
    l2 = 2
    return l1 + l2

env = {}

@debug(env=env)
def g():
    l1 = 1
    l2 = 2
    return l1 + l2

g()
print env
assert 'l1' not in locals()
f()
print l1, l2; del l1, l2
{'l2': 2, 'l1': 1}
1 2
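
Note: the code above uses Python 2 syntax (except Exception, e: and the three-argument raise), matching the notebook's outputs. Purely as a sketch, a Python 3 port of the wrapper (hypothetical name debug3, reusing persistent_locals from above) might look like:

import sys

def debug3(env=None):
    """Python 3 sketch of the debug decorator above (hypothetical)."""
    if env is None:
        env = globals()

    def decorator(f):
        func = persistent_locals(f)
        def wrapper(*v, **kw):
            try:
                return func(*v, **kw)
            except Exception:
                # Drop two traceback levels so the wrapper frames are hidden.
                _, exc, tb = sys.exc_info()
                raise exc.with_traceback(tb.tb_next.tb_next)
            finally:
                env.update(func.locals)
        return wrapper
    return decorator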

Linux

Table of Contents

Some notes about installing Linux on a Dell minitower with a GPU and user policy. Note: this file is evolving into the following collaborative set of notes on CoCalc:

Read more…

Colormaps

In this post we collect some information about using colormaps for plotting. In particular, we pay attention to human perception.

Read more…

Conda Pip and All That

In this post I describe my strategy for managing source code projects and dependencies with Python, Conda, Pip, etc. I also show how to maintain your own "PyPI" for source code packages.

Read more…

SymPy for Code Generation

Code Generation with SymPy

I am often working with density functionals where equations of motion follow by varying an energy functional with respect to some densities. In this post I explore using SymPy as a tool for expressing these relationships and for generating the relevant code.

Read more…

Pade Approximants

Padé Approximants

A Padé approximant is a rational function $A(x) = P(x)/Q(x)$ where $P(x)$ and $Q(x)$ are polynomials. They provide an alternative to Taylor series, often with better behaviour. Here we consider using them to model functions with well-defined behaviour both at small and large parameter values.
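
For instance, SciPy can construct a Padé approximant directly from Taylor-series coefficients. A small sketch (my example, not from the post):

from scipy.interpolate import pade

# Taylor coefficients of exp(x): 1 + x + x**2/2! + x**3/3! + x**4/4!
an = [1.0, 1.0, 1.0/2, 1.0/6, 1.0/24]
p, q = pade(an, 2)       # The [2/2] approximant: quadratic over quadratic.
print(p(1.0) / q(1.0))   # ~2.71429, compared with exp(1) = 2.71828...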

Read more…

Many-body Quantum Mechanics

Many-body Quantum Mechanics

In this notebook, we briefly discuss the formalism of many-body theory from the point of view of quantum mechanics.

Read more…

Distributing Python Packages

Distributing Python Packages

I have reached a point at which I find myself duplicating certain bits of code in various projects and now want to refactor this code into a common package. This post documents the process and contains some links to useful information about how to cleanly distribute a python package. This should culminate in the following clean packages for use in many of my subsequent projects:

Read more…

3d-visualization

3D Visualization

Here I discuss my learning about making 3D visualizations of scientific simulation data. There are two goals: interactive visualization to understand the data, and production of beautiful movies and plots for publication/press coverage etc. $\newcommand{\d}{\mathrm{d}}\newcommand{\vect}[1]{\vec{#1}}\newcommand{\abs}[1]{\lvert#1\rvert}\newcommand{\I}{\mathrm{i}}\newcommand{\braket}[1]{\langle#1\rangle}\newcommand{\ket}[1]{\left|#1\right\rangle}\newcommand{\bra}[1]{\left\langle#1\right|}\newcommand{\pdiff}[2]{\frac{\partial #1}{\partial #2}}\newcommand{\diff}[3][]{\frac{\d^{#1}#2}{\d{#3}{}^{#1}}}$

Tools

Here is a short list of some relevant tools:

  • matplotlib: Although the 3D visualization aspects are very primitive and slow, one can often get a good understanding of a 3D data-set using the 2D primitives. For example, a collection of 2D projections or slices can often give a good idea of how a simulation is evolving (see the sketch after this list). Contour plots or colour can be used to visualize a third dimension.
  • MayaVi2: This is a VTK-based visualization toolkit that can be used for interactive 3D visualization. There is a GUI where you can manually edit and interact with the scene, as well as a rudimentary command-line system for automatically generating visualizations. One problem with all VTK visualizations is that they expect the aspect ratio of the various coordinates to be the same (it was designed to visualize objects in real space, like a CAT-scan of a head). I have found it challenging to visualize data with very different scales in a natural way.
  • Blender: Supposedly a very good piece of software for constructing scenes i.e. adding lighting sources, textures, etc. I am hoping to be able to insert data into a scene to generate good-looking visualizations.
  • YafaRay, Luxrender: These are ray-tracing engines that integrate with Blender for rendering a scene.
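
To illustrate the matplotlib point, here is a small sketch (my own example) showing three orthogonal mid-plane slices of a synthetic 3D dataset:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic 3D dataset: a Gaussian blob on a 64**3 grid.
x, y, z = np.meshgrid(*([np.linspace(-1, 1, 64)] * 3), indexing='ij')
n = np.exp(-(x**2 + 2 * y**2 + 4 * z**2) / 0.1)

# Three orthogonal mid-plane slices give a quick view of the structure.
fig, axs = plt.subplots(1, 3, figsize=(9, 3))
slices = [(n[32, :, :], 'x=0'), (n[:, 32, :], 'y=0'), (n[:, :, 32], 'z=0')]
for ax, (slc, title) in zip(axs, slices):
    ax.imshow(slc.T, origin='lower', extent=(-1, 1, -1, 1))
    ax.set_title(title)
plt.show()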

Read more…

coroutines

Coroutines

Consider the following skeleton for a root-finding code based on the secant method:
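
(The skeleton itself is in the full post. Purely to illustrate the pattern, my own sketch of a generator-based secant stepper, where the caller evaluates f and sends the values back in, might look like this:)

def secant(x0, x1):
    """Generator implementing secant steps; f-values are sent in."""
    f0 = yield x0              # Yield a point, receive f at that point.
    f1 = yield x1
    while True:
        x0, f0, x1 = x1, f1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f1 = yield x1

# Find sqrt(2) as the root of f(x) = x**2 - 2.
f = lambda x: x**2 - 2
stepper = secant(1.0, 2.0)
x = next(stepper)              # Prime the generator.
while abs(f(x)) > 1e-12:
    x = stepper.send(f(x))     # Send f(x), receive the next iterate.
print(x)                       # -> 1.4142135623...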

Read more…

MayaVi

MayaVi for Data Analysis

Our goal here is to use MayaVi to effectively analyze 3D datasets. We shall show how to save the data to an HDF5 file, then load it for visualization in MayaVi. We assume that the data sets are large, and so try to use data sources that only load the required data into memory. Our data will be simple: rectangular meshes.

Read more…

Optimization with Numba

Optimization with Numba

This notebook discusses how to achieve high performance of vectorized operations on large chunks of data with numba.
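
As a taste (my toy sketch, not the notebook's code), numba can compile an explicit loop over a large array:

import numpy as np
from numba import njit

@njit
def abs2(psi, out):
    # One fused loop over the data: no temporaries, runs at compiled speed.
    for i in range(psi.size):
        out[i] = psi[i].real**2 + psi[i].imag**2

psi = np.exp(1j * np.linspace(0, 10.0, 1000000))
out = np.empty(psi.size)
abs2(psi, out)   # The first call compiles; later calls are fast.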

Read more…

cylindrical-dvr

In [1]:
import mmf_setup;mmf_setup.nbinit('default')

This cell contains some definitions for equations. If things look a bit strange, please try the following:

  • Choose "Trust Notebook" from the "File" menu.
  • Re-execute this cell.
  • Reload the notebook.

1. Cylindrical DVR Basis

Here we discuss the DVR discretization for systems with cylindrical symmetry, for which one can use the $d=2$ Bessel-function basis. We specifically demonstrate how to work with this particular basis since it presents a few potential gotchas. First we give a very brief review of the DVR basis and then we demonstrate the cylindrical basis.

Read more…