sources#

class AjaxDataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

A data source that can populate columns by making Ajax calls to REST endpoints.

The AjaxDataSource can be especially useful if you want to make a standalone document (i.e. not backed by the Bokeh server) that can still dynamically update using an existing REST API.

The response from the REST API should match the .data property of a standard ColumnDataSource, i.e. a JSON dict that maps names to arrays of values:

{
    'x' : [1, 2, 3, ...],
    'y' : [9, 3, 2, ...]
}

Alternatively, if the REST API returns a different format, a CustomJS callback can be provided to convert the REST response into Bokeh format, via the adapter property of this data source.

Initial data can be set by specifying the data property directly. This is necessary when used in conjunction with a FactorRange, even if the columns in data are empty.

A full example can be seen at examples/basic/data/ajax_source.py
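A minimal standalone sketch of the pattern described above. The endpoint URL is hypothetical and is assumed to return a JSON dict in the column format shown earlier:

```python
from bokeh.models import AjaxDataSource

# hypothetical endpoint, assumed to return {"x": [...], "y": [...]}
source = AjaxDataSource(data_url="http://localhost:5000/data",
                        polling_interval=1000,  # poll every second
                        mode="append",          # append new rows up to max_size
                        max_size=300)

# initial (empty) columns, set directly on the data property
source.data = dict(x=[], y=[])
```

The source can then be passed to glyph methods exactly like a ColumnDataSource; the browser performs the polling with no Bokeh server involved.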

content_type#

Set the “contentType” parameter for the Ajax request.

http_headers#

Specify HTTP headers to set for the Ajax request.

Example:

ajax_source.http_headers = { 'x-my-custom-header': 'some value' }

if_modified#

Whether to include an If-Modified-Since header in Ajax requests to the server. If this header is supported by the server, then only new data since the last request will be returned.

method#

Specify the HTTP method to use for the Ajax request (GET or POST).

polling_interval#

A polling interval (in milliseconds) for updating the data source.

class CDSView(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

A view into a ColumnDataSource that represents a row-wise subset.

filter#

Defines the subset of indices to use from the data source this view applies to.

By default all indices are used (AllIndices filter). This can be changed by using specialized filters like IndexFilter, BooleanFilter, etc. Filters can be composed using set operations to create non-trivial data masks. This can be accomplished by directly using models like InversionFilter, UnionFilter, etc., or by using set operators on filters, e.g.:

# exclude indices 10 and 11 from the view
cds_view.filter &= ~IndexFilter(indices=[10, 11])
class ColumnDataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

Maps names of columns to sequences or arrays.

The ColumnDataSource is a fundamental data structure of Bokeh. Most plots, data tables, etc. will be driven by a ColumnDataSource.

If the ColumnDataSource initializer is called with a single argument, it can be any of the following:

  • A Python dict that maps string names to sequences of values, e.g. lists, arrays, etc.

    data = {'x': [1,2,3,4], 'y': np.array([10.0, 20.0, 30.0, 40.0])}
    
    source = ColumnDataSource(data)
    

Note

ColumnDataSource only creates a shallow copy of data. Use e.g. ColumnDataSource(copy.deepcopy(data)) if initializing from another ColumnDataSource.data object that you want to keep independent.

  • A Pandas DataFrame object

    source = ColumnDataSource(df)
    

    In this case the CDS will have columns corresponding to the columns of the DataFrame. If the DataFrame columns have multiple levels, they will be flattened using an underscore (e.g. level_0_col_level_1_col). The index of the DataFrame will be flattened to an Index of tuples if it is a MultiIndex, and then reset using reset_index. The result will be a column with the same name if the index was named, or level_0_name_level_1_name if it was a named MultiIndex. If the index had no name, or the MultiIndex names could not be flattened or determined, the reset_index function will name the index column index, or level_0 if the name index is already taken by another column.

  • A Pandas GroupBy object

    group = df.groupby(['colA', 'ColB'])
    

    In this case the CDS will have columns corresponding to the result of calling group.describe(). The describe method generates columns for statistical measures such as mean and count for all the non-grouped original columns. The CDS columns are formed by joining original column names with the computed measure. For example, if a DataFrame has columns 'year' and 'mpg', then passing df.groupby('year') to a CDS will result in columns such as 'mpg_mean'.

    If the GroupBy.describe result has a named index column, then CDS will also have a column with this name. However, if the index name (or any subname of a MultiIndex) is None, then the CDS will have a column generically named index for the index.

    Note this capability to adapt GroupBy objects may only work with Pandas >=0.20.0.
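A runnable sketch of the DataFrame and GroupBy cases above (pandas assumed; the resulting column names follow the flattening rules just described):

```python
import pandas as pd
from bokeh.models import ColumnDataSource

# DataFrame with a named index: the index is reset to a column named 't'
df = pd.DataFrame({"x": [1, 2], "y": [3.0, 4.0]},
                  index=pd.Index([10, 20], name="t"))
cds = ColumnDataSource(df)  # columns: 't', 'x', 'y'

# GroupBy: columns come from group.describe(), joined with underscores
autos = pd.DataFrame({"year": [2000, 2000, 2001],
                      "mpg": [20.0, 30.0, 25.0]})
stats = ColumnDataSource(autos.groupby("year"))  # e.g. 'mpg_mean', 'mpg_count'
```
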

Note

There is an implicit assumption that all the columns in a given ColumnDataSource all have the same length at all times. For this reason, it is usually preferable to update the .data property of a data source “all at once”.
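In practice, "all at once" means swapping in a complete new dict in a single assignment, so column lengths never disagree mid-update:

```python
from bokeh.models import ColumnDataSource

source = ColumnDataSource(data=dict(x=[1, 2], y=[3, 4]))

# preferable: replace all columns in one assignment,
# rather than updating source.data["x"] and source.data["y"] separately
source.data = dict(x=[1, 2, 5], y=[3, 4, 6])
```
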

classmethod from_df(data: pd.DataFrame) DataDict[source]#

Create a dict of columns from a Pandas DataFrame, suitable for creating a ColumnDataSource.

Parameters:

data (DataFrame) – data to convert

Returns:

dict[str, np.array]

classmethod from_groupby(data: pd.core.groupby.GroupBy) DataDict[source]#

Create a dict of columns from a Pandas GroupBy, suitable for creating a ColumnDataSource.

The data generated is the result of running describe on the group.

Parameters:

data (Groupby) – data to convert

Returns:

dict[str, np.array]

__init__(data: DataDict | pd.DataFrame | GroupBy[Any], **kwargs: Any) None[source]#
__init__(**kwargs: Any) None

If called with a single argument that is a dict, dataclass, or pandas.DataFrame, treat that implicitly as the “data” attribute.

add(data: Sequence[Any], name: str | None = None) str[source]#

Appends a new column of data to the data source.

Parameters:
  • data (seq) – new data to add

  • name (str, optional) – column name to use. If not supplied, generate a name of the form “Series ####”

Returns:

the column name used

Return type:

str
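For example (a sketch; the auto-generated name is of the "Series ####" form mentioned above):

```python
from bokeh.models import ColumnDataSource

source = ColumnDataSource(data=dict(x=[1, 2, 3]))

name = source.add([10, 20, 30], name="y")  # returns the name used: 'y'
auto = source.add([7, 8, 9])               # returns a generated "Series ####" name
```
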

patch(patches: Patches, setter: Setter | None = None) None[source]#

Efficiently update data source columns at specific locations.

If it is only necessary to update a small subset of data in a ColumnDataSource, this method can be used to efficiently update only the subset, instead of requiring the entire data set to be sent.

This method should be passed a dictionary that maps column names to lists of tuples that describe a patch change to apply. To replace individual items in columns entirely, the tuples should be of the form:

(index, new_value)  # replace a single column value

# or

(slice, new_values) # replace several column values

Values at an index or slice will be replaced with the corresponding new values.

In the case of columns whose values are other arrays or lists (e.g. image or patches glyphs), it is also possible to patch “subregions”. In this case the first item of the tuple should be a list or array whose first element is the index of the array item in the CDS patch, and whose subsequent elements are integer indices or slices into the array item:

# replace all of column 10 of the array item at index 2:

  +----------------- index of item in column data source
  |
  |       +--------- row subindex into array item
  |       |
  |       |       +- column subindex into array item
  V       V       V
([2, slice(None), 10], new_values)

Imagining a list of 2d NumPy arrays, the patch above is roughly equivalent to:

data = [arr1, arr2, ...]  # list of 2d arrays

data[2][:, 10] = new_values

There are some limitations to the kinds of slices and data that can be accepted.

  • Negative start, stop, or step values for slices will result in a ValueError.

  • In a slice, start > stop will result in a ValueError

  • When patching 1d or 2d subitems, the subitems must be NumPy arrays.

  • New values must be supplied as a flattened one-dimensional array of the appropriate size.

Parameters:

patches (dict[str, list[tuple]]) – lists of patches for each column

Returns:

None

Raises:

ValueError

Example:

The following example shows how to patch entire column elements. In this case, a slice of the 'foo' column and two individual values of the 'bar' column are replaced:

source = ColumnDataSource(data=dict(foo=[10, 20, 30], bar=[100, 200, 300]))

patches = {
    'foo' : [ (slice(2), [11, 12]) ],
    'bar' : [ (0, 101), (2, 301) ],
}

source.patch(patches)

After this operation, the value of the source.data will be:

dict(foo=[11, 12, 30], bar=[101, 200, 301])

For a more comprehensive example, see examples/server/app/patch_app.py.
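A runnable sketch of the subregion case described earlier, patching a single column of one 2d NumPy array item (standalone, no server; NumPy assumed):

```python
import numpy as np
from bokeh.models import ColumnDataSource

# a column whose values are 2d NumPy arrays, e.g. for an image glyph
imgs = [np.zeros((3, 3)) for _ in range(3)]
source = ColumnDataSource(data=dict(img=imgs))

# replace all of column 1 of the array item at index 2,
# supplying new values as a flattened 1d array of the appropriate size
source.patch({"img": [([2, slice(None), 1], np.array([7.0, 8.0, 9.0]))]})
```
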

remove(name: str) None[source]#

Remove a column of data.

Parameters:

name (str) – name of the column to remove

Returns:

None

Note

If the column name does not exist, a warning is issued.

stream(new_data: DataDict, rollover: int | None = None) None[source]#

Efficiently update data source columns with new append-only data.

In cases where it is only necessary to append new data to existing columns, this method can efficiently send only the new data, instead of requiring the entire data set to be re-sent.

Parameters:
  • new_data (dict[str, seq]) –

    a mapping of column names to sequences of new data to append to each column.

    All columns of the data source must be present in new_data, with identical-length append data.

  • rollover (int, optional) – A maximum column size, above which data from the start of the column begins to be discarded. If None, then columns will continue to grow unbounded (default: None)

Returns:

None

Raises:

ValueError

Example:

source = ColumnDataSource(data=dict(foo=[], bar=[]))

# has new, identical-length updates for all columns in source
new_data = {
    'foo' : [10, 20],
    'bar' : [100, 200],
}

source.stream(new_data)
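The rollover parameter can be sketched like this, keeping at most three rows per column:

```python
from bokeh.models import ColumnDataSource

source = ColumnDataSource(data=dict(foo=[1, 2, 3], bar=[10, 20, 30]))

# append one row; the oldest row is discarded to respect rollover=3
source.stream(dict(foo=[4], bar=[40]), rollover=3)
```
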
to_df() pd.DataFrame[source]#

Convert this data source to a pandas DataFrame.

Returns:

DataFrame

property column_names: list[str]#

A list of the column names in this data source.

data#

Mapping of column names to sequences of data. The columns can be, e.g., Python lists or tuples, NumPy arrays, etc.

The .data attribute can also be set from a dataclass, a Pandas DataFrame, or a GroupBy object. In these cases, the behaviour is identical to passing the object to the ColumnDataSource initializer.

property length: int#

Number of row entries in the data.

Note

All columns have the same number of row entries.

class ColumnarDataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

A base class for data source types which can be mapped onto a columnar format.

Note

This is an abstract base class used to help organize the hierarchy of Bokeh model types. It is not useful to instantiate on its own.

default_values#

Defines the default value for each column.

This is used when inserting rows into a data source, e.g. by edit tools, when a value for a given column is not explicitly provided. If a default value is missing, a tool will defer to its own configuration or will try to let the data source infer a sensible default value.

selection_policy#

An instance of a SelectionPolicy that determines how selections are set.

class DataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

A base class for data source types.

Note

This is an abstract base class used to help organize the hierarchy of Bokeh model types. It is not useful to instantiate on its own.

selected#

An instance of a Selection that indicates selected indices on this DataSource. This is a read-only property. You may only change the attributes of this object to change the selection (e.g., selected.indices).
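For example (a sketch), the selection is changed by mutating the existing Selection's attributes rather than assigning a new Selection object:

```python
from bokeh.models import ColumnDataSource

source = ColumnDataSource(data=dict(x=[1, 2, 3]))

# select the first and last rows by mutating selected.indices
source.selected.indices = [0, 2]
```
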

class GeoJSONDataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

geojson#

GeoJSON that contains features for plotting. Currently GeoJSONDataSource can only process a FeatureCollection or GeometryCollection.
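A minimal sketch with a one-feature FeatureCollection (the geojson property takes a JSON string):

```python
import json
from bokeh.models import GeoJSONDataSource

collection = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [0.0, 51.5]},
        "properties": {"name": "example point"},  # properties become columns
    }],
}
geo_source = GeoJSONDataSource(geojson=json.dumps(collection))
```
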

class ServerSentDataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

A data source that can populate columns by receiving data from server-sent event endpoints.

class WebDataSource(*args: Any, id: ID | None = None, **kwargs: Any)[source]#

Base class for web column data sources that can update from data URLs.

Note

This base class is typically not useful to instantiate on its own.

Note

This is an abstract base class used to help organize the hierarchy of Bokeh model types. It is not useful to instantiate on its own.

adapter#

A JavaScript callback to adapt raw JSON responses to Bokeh ColumnDataSource format.

If provided, this callback executes immediately after the JSON data is received, but before appending or replacing data in the data source. The CustomJS callback will receive the AjaxDataSource as cb_obj and will receive the raw JSON response as cb_data.response. The callback code should return a data object suitable for a Bokeh ColumnDataSource (i.e. a mapping of string column names to arrays of data).
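A sketch of an adapter for a hypothetical endpoint that returns rows of objects rather than columns of arrays:

```python
from bokeh.models import AjaxDataSource, CustomJS

# hypothetical endpoint response: {"points": [{"x": 1, "y": 2}, ...]}
adapter = CustomJS(code="""
    const rows = cb_data.response.points
    const x = rows.map((row) => row.x)
    const y = rows.map((row) => row.y)
    return { x, y }
""")

source = AjaxDataSource(data_url="http://localhost:5000/points",
                        polling_interval=2000, adapter=adapter)
```

The adapter runs in the browser on every response, so the Python process never needs to see or reshape the raw payload.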

data_url#

A URL to fetch data from.

max_size#

Maximum size of the data columns. If a new fetch would result in columns larger than max_size, then earlier data is dropped to make room.

mode#

Whether to append new data to existing data (up to max_size), or to replace existing data entirely.