docs: fix typos
found via `codespell -L copie,datas,pres,fo,tooks,noo,ue,ket,frop`
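The `-L` flag above gives codespell a comma-separated list of words to *not* flag (identifiers like `datas` or `fo` that look like typos but are intentional). As a rough illustration of what such a pass does — this is a toy sketch, not codespell itself, and the small `CORRECTIONS` dictionary only covers a few of the typos fixed in this commit:

```python
# Toy sketch of a codespell-style pass: find known misspellings in text,
# skipping anything in the ignore list (analogous to codespell's -L flag).
# CORRECTIONS is a tiny hypothetical dictionary, not codespell's real one.
CORRECTIONS = {
    "proprely": "properly",
    "incase": "in case",
    "theres": "there's",
    "doesnt": "doesn't",
    "occurence": "occurrence",
}

def find_typos(text, ignore=()):
    """Return (word, suggestion) pairs for misspelled words not in `ignore`."""
    hits = []
    for word in text.split():
        w = word.strip(".,;:!?").lower()  # crude token cleanup
        if w in CORRECTIONS and w not in ignore:
            hits.append((w, CORRECTIONS[w]))
    return hits

print(find_typos("proprely resolve class properties", ignore={"datas"}))
# → [('proprely', 'properly')]
```

The real tool scans whole file trees and uses a much larger dictionary; the sketch only shows the flag-vs-ignore logic.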
parent 919c84fb5a
commit d2ef23fcb4

50 changed files with 69 additions and 69 deletions
@@ -17,7 +17,7 @@ General/my.core changes:
 - 746c3da0cadcba3b179688783186d8a0bd0999c5 core.pandas: allow specifying schema; add tests
 - 5313984d8fea2b6eef6726b7b346c1f4316acd01 add `tmp_config` context manager for test & adhoc patching
 - df9a7f7390aee6c69f1abf1c8d1fc7659ebb957c core.pandas: add check for 'error' column + add empty one by default
-- e81dddddf083ffd81aa7e2b715bd34f59949479c proprely resolve class properties in make_config + add test
+- e81dddddf083ffd81aa7e2b715bd34f59949479c properly resolve class properties in make_config + add test
 
 Modules:
 - some innitial work on filling **InfluxDB** with HPI data
@@ -4,7 +4,7 @@ note: this doc is in progress
 
 - interoperable
 
-# note: this link doesnt work in org, but does for the github preview
+# note: this link doesn't work in org, but does for the github preview
 This is the main motivation and [[file:../README.org#why][why]] I created HPI in the first place.
 
 Ideally it should be possible to hook into anything you can imagine -- regardless the database/programming language/etc.
@@ -190,7 +190,7 @@ For an extensive/complex example, you can check out ~@seanbreckenridge~'s [[http
     fast: bool = True
 
     # sort locations by date
-    # incase multiple sources provide them out of order
+    # in case multiple sources provide them out of order
     sort_locations: bool = True
 
     # if the accuracy for the location is more than 5km (this
@@ -113,7 +113,7 @@ Not all HPI Modules are currently at that level of complexity -- some are simple
 
 A related concern is how to structure namespace packages to allow users to easily extend them, and how this conflicts with single file modules (Keep reading below for more information on namespace packages/extension) If a module is converted from a single file module to a namespace with multiple files, it seems this is a breaking change, see [[https://github.com/karlicoss/HPI/issues/89][#89]] for an example of this. The current workaround is to leave it a regular python package with an =__init__.py= for some amount of time and send a deprecation warning, and then eventually remove the =__init__.py= file to convert it into a namespace package. For an example, see the [[https://github.com/karlicoss/HPI/blob/8422c6e420f5e274bd1da91710663be6429c666c/my/reddit/__init__.py][reddit init file]].
 
-Its quite a pain to have to convert a file from a single file module to a namespace module, so if theres *any* possibility that you might convert it to a namespace package, might as well just start it off as one, to avoid the pain down the road. As an example, say you were creating something to parse ~zsh~ history. Instead of creating ~my/zsh.py~, it would be better to create ~my/zsh/parser.py~. That lets users override the file using editable/namespace packages, and it also means in the future its much more trivial to extend it to something like:
+Its quite a pain to have to convert a file from a single file module to a namespace module, so if there's *any* possibility that you might convert it to a namespace package, might as well just start it off as one, to avoid the pain down the road. As an example, say you were creating something to parse ~zsh~ history. Instead of creating ~my/zsh.py~, it would be better to create ~my/zsh/parser.py~. That lets users override the file using editable/namespace packages, and it also means in the future its much more trivial to extend it to something like:
 
 #+begin_src
 my/zsh
@@ -161,7 +161,7 @@ There's no requirement to follow this entire structure when you start off, the e
 
 Note: this section covers some of the complexities and benefits with this being a namespace package and/or editable install, so it assumes some familiarity with python/imports
 
-HPI is installed as a namespace package, which allows an additional way to add your own modules. For the details on namespace packges, see [[https://www.python.org/dev/peps/pep-0420/][PEP420]], or the [[https://packaging.python.org/guides/packaging-namespace-packages][packaging docs for a summary]], but for our use case, a sufficient description might be: Namespace packages let you split a package across multiple directories on disk.
+HPI is installed as a namespace package, which allows an additional way to add your own modules. For the details on namespace packages, see [[https://www.python.org/dev/peps/pep-0420/][PEP420]], or the [[https://packaging.python.org/guides/packaging-namespace-packages][packaging docs for a summary]], but for our use case, a sufficient description might be: Namespace packages let you split a package across multiple directories on disk.
 
 Without adding a bulky/boilerplate-y plugin framework to HPI, as that increases the barrier to entry, [[https://packaging.python.org/guides/creating-and-discovering-plugins/#using-namespace-packages][namespace packages offers an alternative]] with little downsides.
 
@@ -452,7 +452,7 @@ connect the data with other apps and libraries!
 
 See more in [[file:../README.org::#how-do-you-use-it]["How do you use it?"]] section.
 
-Also check out [[https://beepb00p.xyz/myinfra.html#hpi][my personal infrastructure map]] to see wher I'm using HPI.
+Also check out [[https://beepb00p.xyz/myinfra.html#hpi][my personal infrastructure map]] to see where I'm using HPI.
 
 * Adding/modifying modules
 # TODO link to 'overlays' documentation?
@@ -21,7 +21,7 @@ check '2011-05-12 Thu 17:51.*set ><'
 # this would probs be from twint or something?
 check '2013-06-01 Sat 18:48.*<inputfile'
 
 
 # https://twitter.com/karlicoss/status/363703394201894912
 # the quoted acc was suspended and the tweet is only present in archives?
 check '2013-08-03 Sat 16:50.*удивительно, как в одном человеке'
@@ -46,7 +46,7 @@ check '2016-12-13 Tue 20:23.*TIL:.*pypi.python.org/pypi/coloredlogs'
 
 
 # https://twitter.com/karlicoss/status/472151454044917761
-# archive isn't explaning images by default
+# archive isn't expanding images by default
 check '2014-05-29 Thu 23:04.*Выколол сингулярность.*pic.twitter.com/M6XRN1n7KW'
 
 
@@ -76,7 +76,7 @@ check '2014-12-31 Wed 21:00.*2015 заебал'
 check '2021-05-14 Fri 21:08.*RT @SNunoPerez: Me explaining Rage.*'
 
 
-# make sure there is a single occurence (hence, correct tzs)
+# make sure there is a single occurrence (hence, correct tzs)
 check 'A short esoteric Python'
 # https://twitter.com/karlicoss/status/1499174823272099842
 check 'It would be a really good time for countries'
@@ -77,7 +77,7 @@ def entries() -> Iterable[Entry]:
     if len(inps) == 0:
         cmds = [base] # rely on default
     else:
-        # otherise, 'merge' them
+        # otherwise, 'merge' them
         cmds = [base + ['--logfile', f] for f in inps]
 
 import ijson.backends.yajl2_cffi as ijson  # type: ignore
@@ -146,7 +146,7 @@ def dataframe() -> DataFrameT:
     # todo careful about 'how'? we need it to preserve the errors
     # maybe pd.merge is better suited for this??
     df = edf.join(mdf, how='outer', rsuffix='_manual')
-    # todo reindex? so we dont' have Nan leftovers
+    # todo reindex? so we don't have Nan leftovers
 
     # todo set date anyway? maybe just squeeze into the index??
     noendo = df['error'] == NO_ENDOMONDO
@@ -59,7 +59,7 @@ class Commit:
     committed_dt: datetime
    authored_dt: datetime
     message: str
-    repo: str # TODO put canonical name here straightaway??
+    repo: str # TODO put canonical name here straight away??
     sha: str
     ref: Optional[str] = None
     # TODO filter so they are authored by me
@@ -143,7 +143,7 @@ def config_ok() -> bool:
     else:
         info(f'import order: {paths}')
 
-    # first try doing as much as possible without actually imporing my.config
+    # first try doing as much as possible without actually importing my.config
     from .preinit import get_mycfg_dir
     cfg_path = get_mycfg_dir()
     # alternative is importing my.config and then getting cfg_path from its __file__/__path__
@@ -267,7 +267,7 @@ def modules_check(*, verbose: bool, list_all: bool, quick: bool, for_modules: Li
         # todo more specific command?
         error(f'{click.style("FAIL", fg="red")}: {m:<50} loading failed{vw}')
         # check that this is an import error in particular, not because
-        # of a ModuleNotFoundError because some dependency wasnt installed
+        # of a ModuleNotFoundError because some dependency wasn't installed
         if isinstance(e, (ImportError, AttributeError)):
             warn_my_config_import_error(e)
         if verbose:
@@ -441,7 +441,7 @@ def _locate_functions_or_prompt(qualified_names: List[str], prompt: bool = True)
     from .query import locate_qualified_function, QueryException
     from .stats import is_data_provider
 
-    # if not connected to a terminal, cant prompt
+    # if not connected to a terminal, can't prompt
     if not sys.stdout.isatty():
         prompt = False
 
@@ -471,7 +471,7 @@ def _locate_functions_or_prompt(qualified_names: List[str], prompt: bool = True)
     else:
         choices = [f.__name__ for f in data_providers]
         if prompt is False:
-            # theres more than one possible data provider in this module,
+            # there's more than one possible data provider in this module,
             # STDOUT is not a TTY, can't prompt
             eprint("During fallback, more than one possible data provider, can't prompt since STDOUT is not a TTY")
             eprint("Specify one of:")
@@ -576,7 +576,7 @@ def main(debug: bool) -> None:
     # acts as a contextmanager of sorts - any subcommand will then run
     # in something like /tmp/hpi_temp_dir
     # to avoid importing relative modules by accident during development
-    # maybe can be removed later if theres more test coverage/confidence that nothing
+    # maybe can be removed later if there's more test coverage/confidence that nothing
     # would happen?
 
     # use a particular directory instead of a random one, since
@@ -433,7 +433,7 @@ def warn_if_empty(f):
 QUICK_STATS = False
 
 
-# incase user wants to use the stats functions/quick option
+# in case user wants to use the stats functions/quick option
 # elsewhere -- can use this decorator instead of editing
 # the global state directly
 @contextmanager
@@ -127,7 +127,7 @@ else:
     TypedDict = Dict
 
 
-# bisect_left doesnt have a 'key' parameter (which we use)
+# bisect_left doesn't have a 'key' parameter (which we use)
 # till python3.10
 if sys.version_info[:2] <= (3, 9):
     from typing import List, TypeVar, Any, Optional, Callable
@@ -1,5 +1,5 @@
 """
-A helper module for defining denylists for sources programatically
+A helper module for defining denylists for sources programmatically
 (in lamens terms, this lets you remove some output from a module you don't want)
 
 For docs, see doc/DENYLIST.md
@@ -119,7 +119,7 @@ def _extract_requirements(a: ast.Module) -> Requires:
             elif isinstance(c, ast.Str):
                 deps.append(c.s)
             else:
-                raise RuntimeError(f"Expecting string contants only in {REQUIRES} declaration")
+                raise RuntimeError(f"Expecting string constants only in {REQUIRES} declaration")
         return tuple(deps)
     return None
 
@@ -1,7 +1,7 @@
 '''
 A hook to insert user's config directory into Python's search path.
 
-Ideally that would be in __init__.py (so it's executed without having to import explicityly)
+Ideally that would be in __init__.py (so it's executed without having to import explicitly)
 But, with namespace packages, we can't have __init__.py in the parent subpackage
 (see http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html#the-init-py-trap)
 
@@ -46,7 +46,7 @@ def _zstd_open(path: Path, *args, **kwargs) -> IO:
 # TODO use the 'dependent type' trick for return type?
 def kopen(path: PathIsh, *args, mode: str='rt', **kwargs) -> IO:
     # just in case, but I think this shouldn't be necessary anymore
-    # since when we cann .read_text, encoding is passed already
+    # since when we call .read_text, encoding is passed already
     if mode in {'r', 'rt'}:
         encoding = kwargs.get('encoding', 'utf8')
     else:
@@ -145,7 +145,7 @@ class CollapseDebugHandler(logging.StreamHandler):
             import os
             columns, _ = os.get_terminal_size(0)
             # ugh. the columns thing is meh. dunno I guess ultimately need curses for that
-            # TODO also would be cool to have a terminal post-processor? kinda like tail but aware of logging keyworkds (INFO/DEBUG/etc)
+            # TODO also would be cool to have a terminal post-processor? kinda like tail but aware of logging keywords (INFO/DEBUG/etc)
             self.stream.write(msg + ' ' * max(0, columns - len(msg)) + ('' if cur else '\n'))
             self.flush()
         except:
@@ -74,7 +74,7 @@ No 'error' column detected. You probably forgot to handle errors defensively, wh
 from typing import Any, Callable, TypeVar
 FuncT = TypeVar('FuncT', bound=Callable[..., DataFrameT])
 
-# TODO ugh. typing this is a mess... shoul I use mypy_extensions.VarArg/KwArgs?? or what??
+# TODO ugh. typing this is a mess... should I use mypy_extensions.VarArg/KwArgs?? or what??
 from decorator import decorator
 @decorator
 def check_dataframe(f: FuncT, error_col_policy: ErrorColPolicy='add_if_missing', *args, **kwargs) -> DataFrameT:
@@ -26,7 +26,7 @@ ET = Res[T]
 U = TypeVar("U")
 # In a perfect world, the return value from a OrderFunc would just be U,
 # not Optional[U]. However, since this has to deal with so many edge
-# cases, theres a possibility that the functions generated by
+# cases, there's a possibility that the functions generated by
 # _generate_order_by_func can't find an attribute
 OrderFunc = Callable[[ET], Optional[U]]
 Where = Callable[[ET], bool]
@@ -54,7 +54,7 @@ def locate_function(module_name: str, function_name: str) -> Callable[[], Iterab
         for (fname, func) in inspect.getmembers(mod, inspect.isfunction):
             if fname == function_name:
                 return func
-        # incase the function is defined dynamically,
+        # in case the function is defined dynamically,
         # like with a globals().setdefault(...) or a module-level __getattr__ function
         func = getattr(mod, function_name, None)
         if func is not None and callable(func):
@@ -244,7 +244,7 @@ def _drop_unsorted(itr: Iterator[ET], orderfunc: OrderFunc) -> Iterator[ET]:
 
 
 # try getting the first value from the iterator
-# similar to my.core.common.warn_if_empty? this doesnt go through the whole iterator though
+# similar to my.core.common.warn_if_empty? this doesn't go through the whole iterator though
 def _peek_iter(itr: Iterator[ET]) -> Tuple[Optional[ET], Iterator[ET]]:
     itr = more_itertools.peekable(itr)
     try:
@@ -290,7 +290,7 @@ def _handle_unsorted(
         return iter([]), itr
 
 
-# handles creating an order_value functon, using a lookup for
+# handles creating an order_value function, using a lookup for
 # different types. ***This consumes the iterator***, so
 # you should definitely itertoolts.tee it beforehand
 # as to not exhaust the values
@@ -374,7 +374,7 @@ def select(
     by allowing you to provide custom predicates (functions) which can sort
     by a function, an attribute, dict key, or by the attributes values.
 
-    Since this supports mixed types, theres always a possibility
+    Since this supports mixed types, there's always a possibility
     of KeyErrors or AttributeErrors while trying to find some value to order by,
     so this provides multiple mechanisms to deal with that
 
@@ -220,7 +220,7 @@ def _create_range_filter(
     # inclusivity here? Is [after, before) currently,
     # items are included on the lower bound but not the
     # upper bound
-    # typically used for datetimes so doesnt have to
+    # typically used for datetimes so doesn't have to
     # be exact in that case
     def generated_predicate(obj: Any) -> bool:
         ov: Any = attr_func(obj)
@@ -294,7 +294,7 @@ def select_range(
 
     # some operations to do before ordering/filtering
     if drop_exceptions or raise_exceptions or where is not None:
-        # doesnt wrap unsortable items, because we pass no order related kwargs
+        # doesn't wrap unsortable items, because we pass no order related kwargs
         itr = select(itr, where=where, drop_exceptions=drop_exceptions, raise_exceptions=raise_exceptions)
 
     order_by_chosen: Optional[OrderFunc] = None
@@ -356,7 +356,7 @@ Specify a type or a key to order the value by""")
         #
         # this select is also run if the user didn't specify anything to
         # order by, and is just returning the data in the same order as
-        # as the srouce iterable
+        # as the source iterable
         # i.e. none of the range-related filtering code ran, this is just a select
         itr = select(itr,
                      order_by=order_by_chosen,
@@ -483,7 +483,7 @@ def test_parse_range() -> None:
 
     assert res2 == RangeTuple(after=start_date.timestamp(), before=end_date.timestamp(), within=None)
 
-    # cant specify all three
+    # can't specify all three
     with pytest.raises(QueryException, match=r"Cannot specify 'after', 'before' and 'within'"):
         dt_parse_range(unparsed_range=RangeTuple(str(start_date), str(end_date.timestamp()), "7d"))
 
@@ -96,7 +96,7 @@ def _dumps_factory(**kwargs) -> Callable[[Any], str]:
     # is rust-based and compiling on rarer architectures may not work
     # out of the box
     #
-    # unlike the builtin JSON modue which serializes NamedTuples as lists
+    # unlike the builtin JSON module which serializes NamedTuples as lists
     # (even if you provide a default function), simplejson correctly
     # serializes namedtuples to dictionaries
 
@@ -157,7 +157,7 @@ def dumps(
 def test_serialize_fallback() -> None:
     import json as jsn # dont cause possible conflicts with module code
 
-    # cant use a namedtuple here, since the default json.dump serializer
+    # can't use a namedtuple here, since the default json.dump serializer
     # serializes namedtuples as tuples, which become arrays
     # just test with an array of mixed objects
     X = [5, datetime.timedelta(seconds=5.0)]
@@ -216,7 +216,7 @@ def test_default_serializer() -> None:
     def _serialize_with_default(o: Any) -> Any:
         if isinstance(o, Unserializable):
             return {"x": o.x, "y": o.y}
-        raise TypeError("Couldnt serialize")
+        raise TypeError("Couldn't serialize")
 
     # this serializes both Unserializable, which is a custom type otherwise
     # not handled, and timedelta, which is handled by the '_default_encode'
@@ -94,7 +94,7 @@ def sqlite_copy_and_open(db: PathIsh) -> sqlite3.Connection:
 
 # NOTE hmm, so this kinda works
 # V = TypeVar('V', bound=Tuple[Any, ...])
-# def select(cols: V, rest: str, *, db: sqlite3.Connetion) -> Iterator[V]:
+# def select(cols: V, rest: str, *, db: sqlite3.Connection) -> Iterator[V]:
 # but sadly when we pass columns (Tuple[str, ...]), it seems to bind this type to V?
 # and then the return type ends up as Iterator[Tuple[str, ...]], which isn't desirable :(
 # a bit annoying to have this copy-pasting, but hopefully not a big issue
@@ -35,7 +35,7 @@ def is_data_provider(fun: Any) -> bool:
     1. returns iterable or something like that
     2. takes no arguments? (otherwise not callable by stats anyway?)
     3. doesn't start with an underscore (those are probably helper functions?)
-    4. functions isnt the 'inputs' function (or ends with '_inputs')
+    4. functions isn't the 'inputs' function (or ends with '_inputs')
     """
     # todo maybe for 2 allow default arguments? not sure
     # one example which could benefit is my.pdfs
@@ -246,7 +246,7 @@ def stats():
     sys.path = orig_path
     # shouldn't crash at least
     assert res is None # good as far as discovery is concerned
-    assert xx.read_text() == 'some precious data' # make sure module wasn't evauluated
+    assert xx.read_text() == 'some precious data' # make sure module wasn't evaluated
 
 
 ### tests end
@@ -46,7 +46,7 @@ from .core import Json, get_files
 @dataclass
 class Item:
     '''
-    Some completely arbirary artificial stuff, just for testing
+    Some completely arbitrary artificial stuff, just for testing
     '''
     username: str
     raw: Json
@@ -38,7 +38,7 @@ def datas() -> Iterable[Res[Emfit]]:
 import dataclasses

 # data from emfit is coming in UTC. There is no way (I think?) to know the 'real' timezone, and local times matter more for sleep analysis
-# TODO actully this is wrong?? check this..
+# TODO actually this is wrong?? check this..
 emfit_tz = config.timezone

 for x in dal.sleeps(config.export_path):
@@ -177,7 +177,7 @@ def messages() -> Iterator[Res[Message]]:
 reply_to_id = x.reply_to_id
 # hmm, reply_to be missing due to the synthetic nature of export, so have to be defensive
 reply_to = None if reply_to_id is None else msgs.get(reply_to_id)
-# also would be interesting to merge together entities rather than resuling messages from different sources..
+# also would be interesting to merge together entities rather than resulting messages from different sources..
 # then the merging thing could be moved to common?
 try:
 sender = senders[x.sender_id]
@@ -128,7 +128,7 @@ def _get_summary(e) -> Tuple[str, Optional[Link], Optional[EventId], Optional[Bo
 rt = pl['ref_type']
 ref = pl['ref']
 if what == 'created':
-# FIXME should handle delection?...
+# FIXME should handle deletion?...
 eid = EventIds.repo_created(dts=dts, name=rname, ref_type=rt, ref=ref)
 mref = '' if ref is None else ' ' + ref
 # todo link to branch? only contains weird API link though
@@ -58,7 +58,7 @@ def items() -> Iterator[Res[Item]]:
 type=r['type'],
 created=datetime.fromtimestamp(r['time']),
 title=r['title'],
-# todo hmm maybe a method to stip off html tags would be nice
+# todo hmm maybe a method to strip off html tags would be nice
 text_html=r['text'],
 url=r['url'],
 )
@@ -71,7 +71,7 @@ class _Message(_BaseMessage):
 @dataclass(unsafe_hash=True)
 class Message(_BaseMessage):
 user: User
-# TODO could also extract Thread objec? not sure if useful
+# TODO could also extract Thread object? not sure if useful
 # reply_to: Optional[Message]

@@ -242,7 +242,7 @@ def plot_one(sleep: SleepEntry, fig: Figure, axes: Axes, xlims=None, showtext=Tr

 def predicate(sleep: SleepEntry):
 """
-Filter for comparing similar sleep sesssions
+Filter for comparing similar sleep sessions
 """
 start = sleep.created.time()
 end = sleep.completed.time()
@@ -64,7 +64,7 @@ class FallbackLocation(LocationProtocol):
 )


-# a location estimator can return multiple fallbacks, incase there are
+# a location estimator can return multiple fallbacks, in case there are
 # differing accuracies/to allow for possible matches to be computed
 # iteratively
 LocationEstimator = Callable[[DateExact], Iterator[FallbackLocation]]
@@ -50,7 +50,7 @@ def fallback_locations() -> Iterator[FallbackLocation]:
 )


-# for compatibility with my.location.via_ip, this shouldnt be used by other modules
+# for compatibility with my.location.via_ip, this shouldn't be used by other modules
 def locations() -> Iterator[Location]:
 medium("locations is deprecated, should use fallback_locations or estimate_location")
 yield from map(FallbackLocation.to_location, fallback_locations())
@@ -82,7 +82,7 @@ def _iter_via_grep(fo) -> Iterable[TsLatLon]:


 # todo could also use pool? not sure if that would really be faster...
-# earch thread could process 100K at once?
+# search thread could process 100K at once?
 # would need to find out a way to know when to stop? process in some sort of sqrt progression??

@@ -79,7 +79,7 @@ class Annotation(NamedTuple):
 def _as_annotation(*, raw: pdfannots.Annotation, path: str) -> Annotation:
 d = vars(raw)
 pos = raw.pos
-# make mypy happy (pos alwasy present for Annotation https://github.com/0xabu/pdfannots/blob/dbdfefa158971e1746fae2da139918e9f59439ea/pdfannots/types.py#L302)
+# make mypy happy (pos always present for Annotation https://github.com/0xabu/pdfannots/blob/dbdfefa158971e1746fae2da139918e9f59439ea/pdfannots/types.py#L302)
 assert pos is not None
 d['page'] = pos.page.pageno
 return Annotation(
@@ -43,7 +43,7 @@ class Photo(NamedTuple):
 if self.path.startswith(bp):
 return self.path[len(bp):]
 else:
-raise RuntimeError(f'Weird path {self.path}, cant match against anything')
+raise RuntimeError(f"Weird path {self.path}, can't match against anything")

 @property
 def name(self) -> str:
@@ -48,7 +48,7 @@ def _get_exif_data(image) -> Exif:

 def to_degree(value) -> float:
 """Helper function to convert the GPS coordinates
-stored in the EXIF to degress in float format"""
+stored in the EXIF to digress in float format"""
 (d, m, s) = value
 return d + (m / 60.0) + (s / 3600.0)

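As an aside, the `to_degree` helper visible in the hunk above converts an EXIF GPS (degrees, minutes, seconds) triple into decimal degrees; a self-contained sketch of that conversion (mirroring the hunk, not necessarily the module's exact code):

```python
def to_degree(value) -> float:
    """Convert an EXIF GPS (degrees, minutes, seconds) triple to decimal degrees."""
    d, m, s = value
    return d + (m / 60.0) + (s / 3600.0)

# e.g. 51 deg 30' 36" is 51.51 in decimal degrees
print(to_degree((51, 30, 36)))
```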
@@ -65,7 +65,7 @@ from datetime import datetime
 from typing import Optional

 # TODO surely there is a library that does it??
-# TODO this belogs to a private overlay or something
+# TODO this belongs to a private overlay or something
 # basically have a function that patches up dates after the files were yielded..
 _DT_REGEX = re.compile(r'\D(\d{8})\D*(\d{6})\D')
 def dt_from_path(p: Path) -> Optional[datetime]:
@@ -197,7 +197,7 @@ def _get_events(backups: Sequence[Path], parallel: bool=True) -> Iterator[Event]
 # eh. I guess just take max and it will always be correct?
 assert not first
 yield Event(
-dt=bdt, # TODO average wit ps.save_dt?
+dt=bdt, # TODO average with ps.save_dt?
 text="unfavorited",
 kind=ps,
 eid=f'unf-{ps.sid}',
@@ -39,7 +39,7 @@ class Entry(NamedTuple):
 def timestamp(self) -> datetime:
 ts = self.row['timestamp']
 # already with timezone apparently
-# TODO not sure if should stil localize though? it only kept tz offset, not real tz
+# TODO not sure if should still localize though? it only kept tz offset, not real tz
 return datetime.fromisoformat(ts)
 # TODO also has gps info!

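The comment in the hunk above notes that parsing an ISO timestamp keeps only the UTC offset, not a real timezone; a quick illustration of what `datetime.fromisoformat` actually preserves:

```python
from datetime import datetime, timedelta, timezone

# an ISO-8601 string with an offset, as such an export might store it
ts = datetime.fromisoformat('2020-01-01T12:00:00+03:00')

# the parsed datetime is timezone-aware, but its tzinfo is a fixed offset,
# not a named zone (so DST history etc. is lost)
assert ts.tzinfo == timezone(timedelta(hours=3))
print(ts.utcoffset())
```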
@@ -35,7 +35,7 @@ class config(user_config):
 fast: bool = True

 # sort locations by date
-# incase multiple sources provide them out of order
+# in case multiple sources provide them out of order
 sort_locations: bool = True

 # if the accuracy for the location is more than 5km, don't use
@@ -94,7 +94,7 @@ def _locations() -> Iterator[Tuple[LatLon, datetime]]:

 except Exception as e:
 from my.core.warnings import high
-logger.exception("Could not setup via_location using my.location.all provider, falling back to legacy google implemetation", exc_info=e)
+logger.exception("Could not setup via_location using my.location.all provider, falling back to legacy google implementation", exc_info=e)
 high("Setup my.google.takeout.parser, then my.location.all for better google takeout/location data")

 import my.location.google
@@ -134,7 +134,7 @@ def _find_tz_for_locs(finder: Any, locs: Iterable[Tuple[LatLon, datetime]]) -> I
 def _iter_local_dates() -> Iterator[DayWithZone]:
 finder = _timezone_finder(fast=config.fast) # rely on the default
 #pdt = None
-# TODO: warnings doesnt actually warn?
+# TODO: warnings doesn't actually warn?
 # warnings = []

 locs: Iterable[Tuple[LatLon, datetime]]
@@ -102,7 +102,7 @@ def _handle_db(db: sqlite3.Connection) -> Iterator[Res[_Entity]]:
 try:
 yield _parse_person(row)
 except Exception as e:
-# todo attach error contex?
+# todo attach error context?
 yield e

 for row in db.execute('SELECT * FROM match'):
@@ -68,7 +68,7 @@ def watched() -> Iterable[Res[Watched]]:
 continue

 if title.startswith('Subscribed to') and url.startswith('https://www.youtube.com/channel/'):
-# todo might be interesting to process somwhere?
+# todo might be interesting to process somewhere?
 continue

 # all titles contain it, so pointless to include 'Watched '
@@ -32,7 +32,7 @@ def test() -> None:

 assert len(tp) == 1 # should be unique

-# 2.5 K + 4 K datapoints, somwhat overlapping
+# 2.5 K + 4 K datapoints, somewhat overlapping
 assert len(res2020) < 6000

@@ -8,7 +8,7 @@ def test_dynamic_configuration(notes: Path) -> None:
 from my.core.cfg import tmp_config
 with tmp_config() as C:
 C.orgmode = NS(paths=[notes])
-# TODO ugh. this belongs to tz provider or global config or someting
+# TODO ugh. this belongs to tz provider or global config or something
 C.weight = NS(default_timezone=pytz.timezone('Europe/London'))

 from my.body.weight import from_orgmode
@@ -72,7 +72,7 @@ def test_denylist(tmp_path: Path) -> None:
 d.deny(key="dt", value=datetime(2020, 2, 1))

 # test internal behavior, _deny_raw_list should have been updated,
-# but _deny_map doesnt get updated by a call to .deny
+# but _deny_map doesn't get updated by a call to .deny
 #
 # if we change this just update the test, is just here to ensure
 # this is the behaviour
@@ -98,7 +98,7 @@ def test_zippath() -> None:
 ], rpaths


-# TODO hmm this doesn't work atm, wheras Path does
+# TODO hmm this doesn't work atm, whereas Path does
 # not sure if it should be defensive or something...
 # ZipPath('doesnotexist')
 # same for this one
@@ -19,7 +19,7 @@ def test_dynamic_config_1(tmp_path: Path) -> None:
 assert item1.username == 'user'


-# exactly the same test, but using a different config, to test out the behavious w.r.t. import order
+# exactly the same test, but using a different config, to test out the behaviour w.r.t. import order
 def test_dynamic_config_2(tmp_path: Path) -> None:
 # doesn't work without it!
 # because the config from test_dybamic_config_1 is cached in my.demo.demo
@@ -38,7 +38,7 @@ PARAMS = [
 def prepare(request):
 dotpolar = request.param
 class user_config:
-if dotpolar != '': # defaul
+if dotpolar != '': # default
 polar_dir = Path(ROOT / dotpolar)
 defensive = False

@@ -8,7 +8,7 @@ from .common import testdata


 def test_module(with_config) -> None:
-# TODO crap. if module is imported too early (on the top level, it makes it super hard to overrride config)
+# TODO crap. if module is imported too early (on the top level, it makes it super hard to override config)
 # need to at least detect it...
 from my.pdfs import annotations, annotated_pdfs

@@ -52,7 +52,7 @@ def test_tz() -> None:
 tz = LTZ._get_tz(datetime.min)
 assert tz is not None
 else:
-# seems this fails because windows doesnt support same date ranges
+# seems this fails because windows doesn't support same date ranges
 # https://stackoverflow.com/a/41400321/
 with pytest.raises(OSError):
 LTZ._get_tz(datetime.min)