Compare commits


No commits in common. "master" and "v0.5.20241019" have entirely different histories.

178 changed files with 1234 additions and 1583 deletions

.gitignore vendored
View file

@ -155,9 +155,6 @@ celerybeat-schedule
.dmypy.json
dmypy.json
# linters
.ruff_cache/
# Pyre type checker
.pyre/

View file

@ -20,7 +20,7 @@ General/my.core changes:
- e81dddddf083ffd81aa7e2b715bd34f59949479c properly resolve class properties in make_config + add test
Modules:
- some initial work on filling **InfluxDB** with HPI data
- some innitial work on filling **InfluxDB** with HPI data
- pinboard
- 42399f6250d9901d93dcedcfe05f7857babcf834: **breaking backwards compatibility**, use pinbexport module directly

View file

@ -723,10 +723,10 @@ If you want to write modules for personal use but don't want to merge them into
Other HPI Repositories:
- [[https://github.com/purarue/HPI][purarue/HPI]]
- [[https://github.com/seanbreckenridge/HPI][seanbreckenridge/HPI]]
- [[https://github.com/madelinecameron/hpi][madelinecameron/HPI]]
If you want to create your own modules/override something here, you can use the [[https://github.com/purarue/HPI-template][template]].
If you want to create your own modules/override something here, you can use the [[https://github.com/seanbreckenridge/HPI-template][template]].
* Related links
:PROPERTIES:

View file

@ -76,7 +76,7 @@ This would typically be used in an overridden `all.py` file, or in a one-off script in
which you may want to filter out some items from a source, progressively adding more
items to the denylist as you go.
A potential `my/ip/all.py` file might look like (Sidenote: `discord` module from [here](https://github.com/purarue/HPI)):
A potential `my/ip/all.py` file might look like (Sidenote: `discord` module from [here](https://github.com/seanbreckenridge/HPI)):
```python
from typing import Iterator
@ -119,9 +119,9 @@ python3 -c 'from my.ip import all; all.deny.deny_cli(all.ips())'
To edit the `all.py`, you could either:
- install it as editable (`python3 -m pip install --user -e ./HPI`), and then edit the file directly
- or, create a namespace package, which splits the package across multiple directories. For info on that see [`MODULE_DESIGN`](https://github.com/karlicoss/HPI/blob/master/doc/MODULE_DESIGN.org#namespace-packages), [`reorder_editable`](https://github.com/purarue/reorder_editable), and possibly the [`HPI-template`](https://github.com/purarue/HPI-template) to create your own HPI namespace package with your own `all.py` file.
- or, create a namespace package, which splits the package across multiple directories. For info on that see [`MODULE_DESIGN`](https://github.com/karlicoss/HPI/blob/master/doc/MODULE_DESIGN.org#namespace-packages), [`reorder_editable`](https://github.com/seanbreckenridge/reorder_editable), and possibly the [`HPI-template`](https://github.com/seanbreckenridge/HPI-template) to create your own HPI namespace package with your own `all.py` file.
For a real example of this, see [purarue/HPI-personal](https://github.com/purarue/HPI-personal/blob/master/my/ip/all.py)
For a real example of this, see [seanbreckenridge/HPI-personal](https://github.com/seanbreckenridge/HPI-personal/blob/master/my/ip/all.py)
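Since the diff elides most of the snippet above, here is a minimal sketch of such a file (the denylist path and the `discord` source are illustrative, and it assumes `DenyList` exposes a `filter` wrapper):

```python
from typing import Iterator

from my.core.denylist import DenyList
from my.ip.common import IP  # assumed shared type for ip sources

deny = DenyList("~/data/ips/denylist.json")  # placeholder path

def _ips() -> Iterator[IP]:
    # everything the source provides, before any filtering
    from my.ip import discord  # hypothetical source module
    yield from discord.ips()

def ips() -> Iterator[IP]:
    # drop anything matched by the denylist
    yield from deny.filter(_ips())
```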
Sidenote: the reason why we want to specifically override
the all.py and not just create a script that filters out the items you're

View file

@ -76,7 +76,7 @@ The config snippets below are meant to be modified accordingly and *pasted into
You don't have to set up all modules at once, it's recommended to do it gradually, to get the feel of how HPI works.
For an extensive/complex example, you can check out ~@purarue~'s [[https://github.com/purarue/dotfiles/blob/master/.config/my/my/config/__init__.py][config]]
For an extensive/complex example, you can check out ~@seanbreckenridge~'s [[https://github.com/seanbreckenridge/dotfiles/blob/master/.config/my/my/config/__init__.py][config]]
# Nested Configurations before the doc generation using the block below
** [[file:../my/reddit][my.reddit]]
@ -96,7 +96,7 @@ For an extensive/complex example, you can check out ~@purarue~'s [[https://githu
class pushshift:
'''
Uses [[https://github.com/purarue/pushshift_comment_export][pushshift]] to get access to old comments
Uses [[https://github.com/seanbreckenridge/pushshift_comment_export][pushshift]] to get access to old comments
'''
# path[s]/glob to the exported JSON data
@ -106,7 +106,7 @@ For an extensive/complex example, you can check out ~@purarue~'s [[https://githu
** [[file:../my/browser/][my.browser]]
Parses browser history using [[http://github.com/purarue/browserexport][browserexport]]
Parses browser history using [[http://github.com/seanbreckenridge/browserexport][browserexport]]
#+begin_src python
class browser:
@ -132,7 +132,7 @@ For an extensive/complex example, you can check out ~@purarue~'s [[https://githu
You might also be able to use [[file:../my/location/via_ip.py][my.location.via_ip]] which uses =my.ip.all= to
provide geolocation data for IPs (though no IPs are provided from any
of the sources here). For an example of usage, see [[https://github.com/purarue/HPI/tree/master/my/ip][here]]
of the sources here). For an example of usage, see [[https://github.com/seanbreckenridge/HPI/tree/master/my/ip][here]]
#+begin_src python
class location:
@ -256,9 +256,9 @@ for cls, p in modules:
** [[file:../my/google/takeout/parser.py][my.google.takeout.parser]]
Parses Google Takeout using [[https://github.com/purarue/google_takeout_parser][google_takeout_parser]]
Parses Google Takeout using [[https://github.com/seanbreckenridge/google_takeout_parser][google_takeout_parser]]
See [[https://github.com/purarue/google_takeout_parser][google_takeout_parser]] for more information about how to export and organize your takeouts
See [[https://github.com/seanbreckenridge/google_takeout_parser][google_takeout_parser]] for more information about how to export and organize your takeouts
If the =DISABLE_TAKEOUT_CACHE= environment variable is set, this won't
cache individual exports in =~/.cache/google_takeout_parser=
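For reference, a matching config section might look something like this (a sketch only; the attribute name mirrors the stub config that appears later in this diff, and the path is a placeholder):

#+begin_src python
from my.core import Paths

class google:
    # path[s]/glob to the exported takeout archives
    takeout_path: Paths = '~/data/takeout/*.zip'
#+end_src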

View file

@ -67,7 +67,7 @@ If you want to disable a source, you have a few options.
... that suppresses the warning message and lets you use ~my.location.all~ without having to change any lines of code
Another benefit is that all the custom sources/data is localized to the ~all.py~ file, so a user can override the ~all.py~ file (see the sections below on ~namespace packages~) in their own HPI repository, adding additional sources without having to maintain a fork and patching in changes as things eventually change. For a 'real world' example of that, see [[https://github.com/purarue/HPI#partially-in-usewith-overrides][purarue]]'s location and ip modules.
Another benefit is that all the custom sources/data is localized to the ~all.py~ file, so a user can override the ~all.py~ file (see the sections below on ~namespace packages~) in their own HPI repository, adding additional sources without having to maintain a fork and patching in changes as things eventually change. For a 'real world' example of that, see [[https://github.com/seanbreckenridge/HPI#partially-in-usewith-overrides][seanbreckenridge]]'s location and ip modules.
This is of course not required for personal or single-file modules, it's just the pattern that seems to have the least amount of friction for the user, while being extendable, and without using a bulky plugin system to let users add additional sources.
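For instance, an overridden ~all.py~ could look roughly like this (a sketch only: ~gpslogger~ is an illustrative source name, and it assumes the ~Location~ type from ~my.location.common~ plus the ~import_source~ helper from ~my.core.source~):

#+begin_src python
from typing import Iterator

from my.core.source import import_source
from my.location.common import Location

src_gpslogger = import_source(module_name="my.location.gpslogger")

@src_gpslogger
def _gpslogger() -> Iterator[Location]:
    from my.location import gpslogger  # hypothetical source module
    yield from gpslogger.locations()

def locations() -> Iterator[Location]:
    # add more decorated sources here as you enable them
    yield from _gpslogger()
#+end_src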
@ -208,13 +208,13 @@ Where ~lastfm.py~ is your version of ~my.lastfm~, which you've copied from this
Then, running ~python3 -m pip install -e .~ in that directory would install that as part of the namespace package, and assuming (see below for possible issues) this appears on ~sys.path~ before the upstream repository, your ~lastfm.py~ file overrides the upstream. Adding more files, like ~my.some_new_module~ into that directory immediately updates the global ~my~ package -- allowing you to quickly add new modules without having to re-install.
If you install both directories as editable packages (which has the benefit that any changes you make in either repository immediately update the globally installed ~my~ package), there are some concerns with which editable install appears on your ~sys.path~ first. If you wanted your modules to override the upstream modules, yours would have to appear on the ~sys.path~ first (this is the same reason that =custom_lastfm_overlay= must be at the front of your ~PYTHONPATH~). For more details and examples on dealing with editable namespace packages in the context of HPI, see the [[https://github.com/purarue/reorder_editable][reorder_editable]] repository.
If you install both directories as editable packages (which has the benefit that any changes you make in either repository immediately update the globally installed ~my~ package), there are some concerns with which editable install appears on your ~sys.path~ first. If you wanted your modules to override the upstream modules, yours would have to appear on the ~sys.path~ first (this is the same reason that =custom_lastfm_overlay= must be at the front of your ~PYTHONPATH~). For more details and examples on dealing with editable namespace packages in the context of HPI, see the [[https://github.com/seanbreckenridge/reorder_editable][reorder_editable]] repository.
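A quick way to check which editable install currently wins: since ~my~ is a namespace package, its ~__path__~ lists every directory contributing to it, in precedence order (a sketch, assuming both repositories are installed with ~pip install -e~):

#+begin_src python
import my

# directories earlier in this list shadow later ones
print(list(my.__path__))
#+end_src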
There is no limit to how many directories you could install into a single namespace package, which could be a way for people to install additional HPI modules, without worrying about the module count here becoming too large to manage.
There are some other users [[https://github.com/hpi/hpi][who have begun publishing their own modules]] as namespace packages, which you could potentially install and use, in addition to this repository, if any of those interest you. If you want to create your own you can use the [[https://github.com/purarue/HPI-template][template]] to get started.
There are some other users [[https://github.com/hpi/hpi][who have begun publishing their own modules]] as namespace packages, which you could potentially install and use, in addition to this repository, if any of those interest you. If you want to create your own you can use the [[https://github.com/seanbreckenridge/HPI-template][template]] to get started.
Though, enabling this many modules may make ~hpi doctor~ look pretty busy. You can explicitly choose to enable/disable modules with a list of modules/regexes in your [[https://github.com/karlicoss/HPI/blob/f559e7cb899107538e6c6bbcf7576780604697ef/my/core/core_config.py#L24-L55][core config]], see [[https://github.com/purarue/dotfiles/blob/a1a77c581de31bd55a6af3d11b8af588614a207e/.config/my/my/config/__init__.py#L42-L72][here]] for an example.
Though, enabling this many modules may make ~hpi doctor~ look pretty busy. You can explicitly choose to enable/disable modules with a list of modules/regexes in your [[https://github.com/karlicoss/HPI/blob/f559e7cb899107538e6c6bbcf7576780604697ef/my/core/core_config.py#L24-L55][core config]], see [[https://github.com/seanbreckenridge/dotfiles/blob/a1a77c581de31bd55a6af3d11b8af588614a207e/.config/my/my/config/__init__.py#L42-L72][here]] for an example.
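Concretely, such a core config section might look something like this (a sketch; the module names/regexes are illustrative):

#+begin_src python
class core:
    enabled_modules = [
        'my.browser.*',
        'my.reddit.rexport',
    ]
    disabled_modules = [
        'my.location.via_ip',
    ]
#+end_src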
You may use the other modules or [[https://github.com/karlicoss/hpi-personal-overlay][my overlay]] as reference, but python packaging is already a complicated issue, before adding complexities like namespace packages and editable installs on top of it... If you're having trouble extending HPI in this fashion, you can open an issue here, preferably with a link to your code/repository and/or ~setup.py~ you're trying to use.

View file

@ -10,7 +10,7 @@ Relevant discussion about overlays: https://github.com/karlicoss/HPI/issues/102
# You can see them TODO in overlays dir
Consider a toy package/module structure with minimal code, without any actual data parsing, just for demonstration purposes.
Consider a toy package/module structure with minimal code, wihout any actual data parsing, just for demonstration purposes.
- =main= package structure
# TODO do links
@ -19,7 +19,7 @@ Consider a toy package/module structure with minimal code, without any actual da
Extracts Twitter data from GDPR archive.
- =my/twitter/all.py=
Merges twitter data from multiple sources (only =gdpr= in this case), so data consumers are agnostic of specific data sources used.
This will be overridden by =overlay=.
This will be overriden by =overlay=.
- =my/twitter/common.py=
Contains helper function to merge data, so they can be reused by overlay's =all.py=.
- =my/reddit.py=
@ -66,7 +66,7 @@ This basically means that modules will be searched in both paths, with overlay t
** Installing with =--use-pep517=
See here for discussion https://github.com/purarue/reorder_editable/issues/2, but TLDR it should work similarly.
See here for discussion https://github.com/seanbreckenridge/reorder_editable/issues/2, but TLDR it should work similarly.
* Testing runtime behaviour (editable install)
@ -126,7 +126,7 @@ https://github.com/python/mypy/blob/1dd8e7fe654991b01bd80ef7f1f675d9e3910c3a/myp
For now, I opened an issue in mypy repository https://github.com/python/mypy/issues/16683
But ok, maybe mypy treats =main= as an external package somehow but still type checks it properly?
But ok, maybe mypy treats =main= as an external package somhow but still type checks it properly?
Let's see what's going on with imports:
: $ mypy --namespace-packages --strict -p my --follow-imports=error

View file

@ -97,9 +97,9 @@ By default, this just returns the items in the order they were returned by the f
hpi query my.coding.commits.commits --order-key committed_dt --limit 1 --reverse --output pprint --stream
Commit(committed_dt=datetime.datetime(2023, 4, 14, 23, 9, 1, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=61200))),
authored_dt=datetime.datetime(2023, 4, 14, 23, 4, 1, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=61200))),
message='sources.smscalls: propagate errors if there are breaking '
message='sources.smscalls: propogate errors if there are breaking '
'schema changes',
repo='/home/username/Repos/promnesia-fork',
repo='/home/sean/Repos/promnesia-fork',
sha='22a434fca9a28df9b0915ccf16368df129d2c9ce',
ref='refs/heads/smscalls-handle-result')
```
@ -195,7 +195,7 @@ To preview, you can use something like [`qgis`](https://qgis.org/en/site/) or fo
<img src="https://user-images.githubusercontent.com/7804791/232249184-7e203ee6-a3ec-4053-800c-751d2c28e690.png" width=500 alt="chicago trip" />
(Sidenote: this is [`@purarue`](https://github.com/purarue/)'s locations, on a trip to Chicago)
(Sidenote: this is [`@seanbreckenridge`](https://github.com/seanbreckenridge/)'s locations, on a trip to Chicago)
## Python reference
@ -301,4 +301,4 @@ The `hpi query` command is a CLI wrapper around the code in [`query.py`](../my/c
If you specify a range, drop_unsorted is forced to be True
```
Those can be imported and accept any sort of iterator, `hpi query` just defaults to the output of functions here. As an example, see [`listens`](https://github.com/purarue/HPI-personal/blob/master/scripts/listens) which just passes a generator (iterator) as the first argument to `query_range`
Those can be imported and accept any sort of iterator, `hpi query` just defaults to the output of functions here. As an example, see [`listens`](https://github.com/seanbreckenridge/HPI-personal/blob/master/scripts/listens) which just passes a generator (iterator) as the first argument to `query_range`
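A rough sketch of calling the query code directly (this assumes `my.core.query.select` keeps the `order_key`/`limit`/`reverse` keyword arguments that the CLI wraps; the `Listen` type is illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

from my.core.query import select

@dataclass
class Listen:
    when: datetime
    track: str

def listens():
    # stand-in for a real HPI data provider
    base = datetime(2023, 4, 14)
    for i in range(5):
        yield Listen(when=base + timedelta(days=i), track=f'track {i}')

# most recent item, like: hpi query ... --order-key when --limit 1 --reverse
print(list(select(listens(), order_key='when', limit=1, reverse=True)))
```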

View file

@ -387,7 +387,7 @@ But there is an extra caveat: rexport is already coming with nice [[https://gith
Several other HPI modules are following a similar pattern: hypothesis, instapaper, pinboard, kobo, etc.
Since the [[https://github.com/karlicoss/rexport#api-limitations][reddit API has limited results]], you can use [[https://github.com/purarue/pushshift_comment_export][my.reddit.pushshift]] to access older reddit comments, which both then get merged into =my.reddit.all.comments=
Since the [[https://github.com/karlicoss/rexport#api-limitations][reddit API has limited results]], you can use [[https://github.com/seanbreckenridge/pushshift_comment_export][my.reddit.pushshift]] to access older reddit comments, which both then get merged into =my.reddit.all.comments=
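Roughly, =my.reddit.all= follows the same =import_source= pattern as the other =all.py= files discussed in this diff; a sketch (not the exact code, and without the deduplication the real module does):

#+begin_src python
from typing import Iterator

from my.core.source import import_source

src_rexport = import_source(module_name="my.reddit.rexport")
src_pushshift = import_source(module_name="my.reddit.pushshift")

@src_rexport
def _rexport_comments() -> Iterator:
    from my.reddit import rexport
    yield from rexport.comments()

@src_pushshift
def _pushshift_comments() -> Iterator:
    from my.reddit import pushshift
    yield from pushshift.comments()

def comments() -> Iterator:
    # pushshift supplies the older comments the reddit API can no longer return
    yield from _rexport_comments()
    yield from _pushshift_comments()
#+end_src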
** Twitter

View file

@ -32,6 +32,6 @@ ignore =
#
# as a reference:
# https://github.com/purarue/cookiecutter-template/blob/master/%7B%7Bcookiecutter.module_name%7D%7D/setup.cfg
# https://github.com/seanbreckenridge/cookiecutter-template/blob/master/%7B%7Bcookiecutter.module_name%7D%7D/setup.cfg
# and this https://github.com/karlicoss/HPI/pull/151
# find ./my | entr flake8 --ignore=E402,E501,E741,W503,E266,E302,E305,E203,E261,E252,E251,E221,W291,E225,E303,E702,E202,F841,E731,E306,E127,E722,E231 my | grep -v __NOT_HPI_MODULE__

View file

@ -2,22 +2,20 @@
[[https://github.com/nomeata/arbtt#arbtt-the-automatic-rule-based-time-tracker][Arbtt]] time tracking
'''
from __future__ import annotations
REQUIRES = ['ijson', 'cffi']
# NOTE likely also needs libyajl2 from apt or elsewhere?
from collections.abc import Iterable, Sequence
from dataclasses import dataclass
from pathlib import Path
from typing import Sequence, Iterable, List, Optional
def inputs() -> Sequence[Path]:
try:
from my.config import arbtt as user_config
except ImportError:
from my.core.warnings import low
from .core.warnings import low
low("Couldn't find 'arbtt' config section, falling back to the default capture.log (usually in HOME dir). Add 'arbtt' section with logfiles = '' to suppress this warning.")
return []
else:
@ -57,7 +55,7 @@ class Entry:
return fromisoformat(ds)
@property
def active(self) -> str | None:
def active(self) -> Optional[str]:
# NOTE: WIP, might change this in the future...
ait = (w for w in self.json['windows'] if w['active'])
a = next(ait, None)
@ -76,18 +74,17 @@ class Entry:
def entries() -> Iterable[Entry]:
inps = list(inputs())
base: list[PathIsh] = ['arbtt-dump', '--format=json']
base: List[PathIsh] = ['arbtt-dump', '--format=json']
cmds: list[list[PathIsh]]
cmds: List[List[PathIsh]]
if len(inps) == 0:
cmds = [base] # rely on default
else:
# otherwise, 'merge' them
cmds = [[*base, '--logfile', f] for f in inps]
from subprocess import PIPE, Popen
import ijson.backends.yajl2_cffi as ijson # type: ignore
from subprocess import Popen, PIPE
for cmd in cmds:
with Popen(cmd, stdout=PIPE) as p:
out = p.stdout; assert out is not None
@ -96,8 +93,8 @@ def entries() -> Iterable[Entry]:
def fill_influxdb() -> None:
from .core.freezer import Freezer
from .core.influxdb import magic_fill
from .core.freezer import Freezer
freezer = Freezer(Entry)
fit = (freezer.freeze(e) for e in entries())
# TODO crap, influxdb doesn't like None https://github.com/influxdata/influxdb/issues/7722
@ -109,8 +106,6 @@ def fill_influxdb() -> None:
magic_fill(fit, name=f'{entries.__module__}:{entries.__name__}')
from .core import Stats, stat
from .core import stat, Stats
def stats() -> Stats:
return stat(entries)

View file

@ -2,17 +2,14 @@
[[https://bluemaestro.com/products/product-details/bluetooth-environmental-monitor-and-logger][Bluemaestro]] temperature/humidity/pressure monitor
"""
from __future__ import annotations
# todo most of it belongs to DAL... but considering so few people use it I didn't bother for now
import re
import sqlite3
from abc import abstractmethod
from collections.abc import Iterable, Sequence
from dataclasses import dataclass
from datetime import datetime, timedelta
from pathlib import Path
from typing import Protocol
from typing import Iterable, Optional, Protocol, Sequence, Set
import pytz
@ -90,17 +87,17 @@ def measurements() -> Iterable[Res[Measurement]]:
total = len(paths)
width = len(str(total))
last: datetime | None = None
last: Optional[datetime] = None
# tables are immutable, so can save on processing..
processed_tables: set[str] = set()
processed_tables: Set[str] = set()
for idx, path in enumerate(paths):
logger.info(f'processing [{idx:>{width}}/{total:>{width}}] {path}')
tot = 0
new = 0
# todo assert increasing timestamp?
with sqlite_connect_immutable(path) as db:
db_dt: datetime | None = None
db_dt: Optional[datetime] = None
try:
datas = db.execute(
f'SELECT "{path.name}" as name, Time, Temperature, Humidity, Pressure, Dewpoint FROM data ORDER BY log_index'

View file

@ -2,42 +2,41 @@
Blood tracking (manual org-mode entries)
"""
from __future__ import annotations
from collections.abc import Iterable
from datetime import datetime
from typing import NamedTuple
import orgparse
import pandas as pd
from my.config import blood as config # type: ignore[attr-defined]
from typing import Iterable, NamedTuple, Optional
from ..core.error import Res
from ..core.orgmode import one_table, parse_org_datetime
from ..core.orgmode import parse_org_datetime, one_table
import pandas as pd
import orgparse
from my.config import blood as config # type: ignore[attr-defined]
class Entry(NamedTuple):
dt: datetime
ketones : float | None=None
glucose : float | None=None
ketones : Optional[float]=None
glucose : Optional[float]=None
vitamin_d : float | None=None
vitamin_b12 : float | None=None
vitamin_d : Optional[float]=None
vitamin_b12 : Optional[float]=None
hdl : float | None=None
ldl : float | None=None
triglycerides: float | None=None
hdl : Optional[float]=None
ldl : Optional[float]=None
triglycerides: Optional[float]=None
source : str | None=None
extra : str | None=None
source : Optional[str]=None
extra : Optional[str]=None
Result = Res[Entry]
def try_float(s: str) -> float | None:
def try_float(s: str) -> Optional[float]:
l = s.split()
if len(l) == 0:
return None
@ -106,7 +105,6 @@ def blood_tests_data() -> Iterable[Result]:
def data() -> Iterable[Result]:
from itertools import chain
from ..core.error import sort_res_by
datas = chain(glucose_ketones_data(), blood_tests_data())
return sort_res_by(datas, key=lambda e: e.dt)

View file

@ -7,10 +7,10 @@ from ...core.pandas import DataFrameT, check_dataframe
@check_dataframe
def dataframe() -> DataFrameT:
# this should be somehow more flexible...
import pandas as pd
from ...endomondo import dataframe as EDF
from ...runnerup import dataframe as RDF
import pandas as pd
return pd.concat([
EDF(),
RDF(),

View file

@ -3,6 +3,7 @@ Cardio data, filtered from various data sources
'''
from ...core.pandas import DataFrameT, check_dataframe
CARDIO = {
'Running',
'Running, treadmill',

View file

@ -5,18 +5,16 @@ This is probably too specific to my needs, so later I will move it away to a per
For now it's worth keeping it here as an example and perhaps utility functions might be useful for other HPI modules.
'''
from __future__ import annotations
from datetime import datetime, timedelta
from typing import Optional
import pytz
from ...core.pandas import DataFrameT, check_dataframe as cdf
from ...core.orgmode import collect, Table, parse_org_datetime, TypedTable
from my.config import exercise as config
from ...core.orgmode import Table, TypedTable, collect, parse_org_datetime
from ...core.pandas import DataFrameT
from ...core.pandas import check_dataframe as cdf
import pytz
# FIXME how to attach it properly?
tz = pytz.timezone('Europe/London')
@ -116,7 +114,7 @@ def dataframe() -> DataFrameT:
rows.append(rd) # presumably has an error set
continue
idx: int | None
idx: Optional[int]
close = edf[edf['start_time'].apply(lambda t: pd_date_diff(t, mdate)).abs() < _DELTA]
if len(close) == 0:
idx = None
@ -165,9 +163,7 @@ def dataframe() -> DataFrameT:
# TODO wtf?? where is speed coming from??
from ...core import Stats, stat
from ...core import stat, Stats
def stats() -> Stats:
return stat(cross_trainer_data)

View file

@ -1,6 +1,5 @@
from ...core import Stats, stat
from ...core.pandas import DataFrameT
from ...core.pandas import check_dataframe as cdf
from ...core import stat, Stats
from ...core.pandas import DataFrameT, check_dataframe as cdf
class Combine:

View file

@ -1,6 +1,7 @@
from ... import emfit, jawbone
from .common import Combine
from ... import jawbone
from ... import emfit
from .common import Combine
_combined = Combine([
jawbone,
emfit,

View file

@ -2,15 +2,15 @@
Weight data (manually logged)
'''
from collections.abc import Iterator
from dataclasses import dataclass
from datetime import datetime
from typing import Any
from typing import Any, Iterator
from my import orgmode
from my.core import make_logger
from my.core.error import Res, extract_error_datetime, set_error_datetime
from my import orgmode
config = Any

View file

@ -1,6 +1,7 @@
from my.core import warnings
from ..core import warnings
warnings.high('my.books.kobo is deprecated! Please use my.kobo instead!')
from my.core.util import __NOT_HPI_MODULE__
from my.kobo import *
from ..core.util import __NOT_HPI_MODULE__
from ..kobo import * # type: ignore[no-redef]

View file

@ -1,5 +1,5 @@
"""
Parses active browser history by backing it up with [[http://github.com/purarue/sqlite_backup][sqlite_backup]]
Parses active browser history by backing it up with [[http://github.com/seanbreckenridge/sqlite_backup][sqlite_backup]]
"""
REQUIRES = ["browserexport", "sqlite_backup"]
@ -19,18 +19,16 @@ class config(user_config.active_browser):
export_path: Paths
from collections.abc import Iterator, Sequence
from pathlib import Path
from typing import Sequence, Iterator
from browserexport.merge import Visit, read_visits
from my.core import get_files, Stats, make_logger
from browserexport.merge import read_visits, Visit
from sqlite_backup import sqlite_backup
from my.core import Stats, get_files, make_logger
logger = make_logger(__name__)
from .common import _patch_browserexport_logs
_patch_browserexport_logs(logger.level)

View file

@ -1,9 +1,9 @@
from collections.abc import Iterator
from browserexport.merge import Visit, merge_visits
from typing import Iterator
from my.core import Stats
from my.core.source import import_source
from browserexport.merge import merge_visits, Visit
src_export = import_source(module_name="my.browser.export")
src_active = import_source(module_name="my.browser.active_browser")

View file

@ -1,15 +1,14 @@
"""
Parses browser history using [[http://github.com/purarue/browserexport][browserexport]]
Parses browser history using [[http://github.com/seanbreckenridge/browserexport][browserexport]]
"""
REQUIRES = ["browserexport"]
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator, Sequence
from browserexport.merge import Visit, read_and_merge
import my.config
from my.core import (
Paths,
Stats,
@ -19,9 +18,9 @@ from my.core import (
)
from my.core.cachew import mcachew
from .common import _patch_browserexport_logs
from browserexport.merge import read_and_merge, Visit
import my.config # isort: skip
from .common import _patch_browserexport_logs
@dataclass

View file

@ -3,24 +3,24 @@ Bumble data from Android app database (in =/data/data/com.bumble.app/databases/C
"""
from __future__ import annotations
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from datetime import datetime
from pathlib import Path
from typing import Iterator, Sequence, Optional, Dict
from more_itertools import unique_everseen
from my.core import Paths, get_files
from my.config import bumble as user_config # isort: skip
from my.config import bumble as user_config
from ..core import Paths
@dataclass
class config(user_config.android):
# path[s]/glob to the exported sqlite databases
export_path: Paths
from ..core import get_files
from pathlib import Path
def inputs() -> Sequence[Path]:
return get_files(config.export_path)
@ -43,23 +43,21 @@ class _BaseMessage:
@dataclass(unsafe_hash=True)
class _Message(_BaseMessage):
conversation_id: str
reply_to_id: str | None
reply_to_id: Optional[str]
@dataclass(unsafe_hash=True)
class Message(_BaseMessage):
person: Person
reply_to: Message | None
reply_to: Optional[Message]
import json
import sqlite3
from typing import Union
from my.core.compat import assert_never
from ..core import Res
from ..core.sqlite import select, sqlite_connect_immutable
import sqlite3
from ..core.sqlite import sqlite_connect_immutable, select
from my.core.compat import assert_never
EntitiesRes = Res[Union[Person, _Message]]
@ -122,8 +120,8 @@ _UNKNOWN_PERSON = "UNKNOWN_PERSON"
def messages() -> Iterator[Res[Message]]:
id2person: dict[str, Person] = {}
id2msg: dict[str, Message] = {}
id2person: Dict[str, Person] = {}
id2msg: Dict[str, Message] = {}
for x in unique_everseen(_entities(), key=_key):
if isinstance(x, Exception):
yield x

View file

@ -16,7 +16,6 @@ from my.core.time import zone_to_countrycode
@lru_cache(1)
def _calendar():
from workalendar.registry import registry # type: ignore
# todo switch to using time.tz.main once _get_tz stabilizes?
from ..time.tz import via_location as LTZ
# TODO would be nice to do it dynamically depending on the past timezones...

View file

@ -1,6 +1,7 @@
import my.config as config
from .core import __NOT_HPI_MODULE__
from .core import warnings as W
# still used in Promnesia, maybe in dashboard?

View file

@ -1,12 +1,13 @@
import json
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from datetime import datetime, timezone
from functools import cached_property
import json
from pathlib import Path
from typing import Dict, Iterator, Sequence
from my.core import get_files, Res, datetime_aware
from my.config import codeforces as config # type: ignore[attr-defined]
from my.core import Res, datetime_aware, get_files
def inputs() -> Sequence[Path]:
@ -38,7 +39,7 @@ class Competition:
class Parser:
def __init__(self, *, inputs: Sequence[Path]) -> None:
self.inputs = inputs
self.contests: dict[ContestId, Contest] = {}
self.contests: Dict[ContestId, Contest] = {}
def _parse_allcontests(self, p: Path) -> Iterator[Contest]:
j = json.loads(p.read_text())

View file

@ -1,32 +1,29 @@
"""
Git commits data for repositories on your filesystem
"""
from __future__ import annotations
REQUIRES = [
'gitpython',
]
import shutil
from collections.abc import Iterator, Sequence
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional, cast
from my.core import LazyLogger, PathIsh, make_config
import shutil
from pathlib import Path
from datetime import datetime, timezone
from dataclasses import dataclass, field
from typing import List, Optional, Iterator, Set, Sequence, cast
from my.core import PathIsh, LazyLogger, make_config
from my.core.cachew import cache_dir, mcachew
from my.core.warnings import high
from my.config import commits as user_config # isort: skip
from my.config import commits as user_config
@dataclass
class commits_cfg(user_config):
roots: Sequence[PathIsh] = field(default_factory=list)
emails: Sequence[str] | None = None
names: Sequence[str] | None = None
emails: Optional[Sequence[str]] = None
names: Optional[Sequence[str]] = None
# experiment to make it lazy?
@ -43,6 +40,7 @@ def config() -> commits_cfg:
import git
from git.repo.fun import is_git_dir
log = LazyLogger(__name__, level='info')
@ -95,7 +93,7 @@ def _git_root(git_dir: PathIsh) -> Path:
return gd # must be bare
def _repo_commits_aux(gr: git.Repo, rev: str, emitted: set[str]) -> Iterator[Commit]:
def _repo_commits_aux(gr: git.Repo, rev: str, emitted: Set[str]) -> Iterator[Commit]:
# without path might not handle pull heads properly
for c in gr.iter_commits(rev=rev):
if not by_me(c):
@ -122,7 +120,7 @@ def _repo_commits_aux(gr: git.Repo, rev: str, emitted: set[str]) -> Iterator[Com
def repo_commits(repo: PathIsh):
gr = git.Repo(str(repo))
emitted: set[str] = set()
emitted: Set[str] = set()
for r in gr.references:
yield from _repo_commits_aux(gr=gr, rev=r.path, emitted=emitted)
@ -143,14 +141,14 @@ def canonical_name(repo: Path) -> str:
def _fd_path() -> str:
# todo move it to core
fd_path: str | None = shutil.which("fdfind") or shutil.which("fd-find") or shutil.which("fd")
fd_path: Optional[str] = shutil.which("fdfind") or shutil.which("fd-find") or shutil.which("fd")
if fd_path is None:
high("my.coding.commits requires 'fd' to be installed, See https://github.com/sharkdp/fd#installation")
assert fd_path is not None
return fd_path
def git_repos_in(roots: list[Path]) -> list[Path]:
def git_repos_in(roots: List[Path]) -> List[Path]:
from subprocess import check_output
outputs = check_output([
_fd_path(),
@ -174,7 +172,7 @@ def git_repos_in(roots: list[Path]) -> list[Path]:
return repos
def repos() -> list[Path]:
def repos() -> List[Path]:
return git_repos_in(list(map(Path, config().roots)))
@ -192,7 +190,7 @@ def _repo_depends_on(_repo: Path) -> int:
raise RuntimeError(f"Could not find a FETCH_HEAD/HEAD file in {_repo}")
def _commits(_repos: list[Path]) -> Iterator[Commit]:
def _commits(_repos: List[Path]) -> Iterator[Commit]:
for r in _repos:
yield from _cached_commits(r)

View file

@ -1,6 +1,6 @@
from .core.warnings import high
high("DEPRECATED! Please use my.core.common instead.")
from .core import __NOT_HPI_MODULE__
from .core.common import *

View file

@ -9,18 +9,17 @@ This file is used for:
- mypy: this file provides some type annotations
- for loading the actual user config
'''
from __future__ import annotations
#### NOTE: you won't need this line VVVV in your personal config
from my.core import init # noqa: F401 # isort: skip
from my.core import init # noqa: F401
###
from datetime import tzinfo
from pathlib import Path
from typing import List
from my.core import PathIsh, Paths
from my.core import Paths, PathIsh
class hypothesis:
@ -76,16 +75,14 @@ class google:
takeout_path: Paths = ''
from collections.abc import Sequence
from datetime import date, datetime, timedelta
from typing import Union
from typing import Sequence, Union, Tuple
from datetime import datetime, date, timedelta
DateIsh = Union[datetime, date, str]
LatLon = tuple[float, float]
LatLon = Tuple[float, float]
class location:
# todo ugh, need to think about it... mypy wants the type here to be general, otherwise it can't deduce
# and we can't import the types from the module itself, otherwise would be circular. common module?
home: LatLon | Sequence[tuple[DateIsh, LatLon]] = (1.0, -1.0)
home: Union[LatLon, Sequence[Tuple[DateIsh, LatLon]]] = (1.0, -1.0)
home_accuracy = 30_000.0
class via_ip:
@ -106,8 +103,6 @@ class location:
from typing import Literal
class time:
class tz:
policy: Literal['keep', 'convert', 'throw']
@ -126,9 +121,10 @@ class arbtt:
logfiles: Paths
from typing import Optional
class commits:
emails: Sequence[str] | None
names: Sequence[str] | None
emails: Optional[Sequence[str]]
names: Optional[Sequence[str]]
roots: Sequence[PathIsh]
@ -154,8 +150,8 @@ class tinder:
class instagram:
class android:
export_path: Paths
username: str | None
full_name: str | None
username: Optional[str]
full_name: Optional[str]
class gdpr:
export_path: Paths
@ -173,7 +169,7 @@ class materialistic:
class fbmessenger:
class fbmessengerexport:
export_db: PathIsh
facebook_id: str | None
facebook_id: Optional[str]
class android:
export_path: Paths
@ -251,7 +247,7 @@ class runnerup:
class emfit:
export_path: Path
timezone: tzinfo
excluded_sids: list[str]
excluded_sids: List[str]
class foursquare:
@ -274,7 +270,7 @@ class roamresearch:
class whatsapp:
class android:
export_path: Paths
my_user_id: str | None
my_user_id: Optional[str]
class harmonic:

View file

@ -4,7 +4,7 @@ from typing import TYPE_CHECKING
from .cfg import make_config
from .common import PathIsh, Paths, get_files
from .compat import assert_never
from .error import Res, notnone, unwrap
from .error import Res, unwrap, notnone
from .logging import (
make_logger,
)
@ -29,25 +29,22 @@ if not TYPE_CHECKING:
__all__ = [
'__NOT_HPI_MODULE__',
'get_files', 'PathIsh', 'Paths',
'Json',
'LazyLogger', # legacy import
'Path',
'PathIsh',
'Paths',
'Res',
'Stats',
'assert_never', # TODO maybe deprecate from use in my.core? will be in stdlib soon
'dataclass',
'datetime_aware',
'datetime_naive',
'get_files',
'make_config',
'make_logger',
'notnone',
'stat',
'unwrap',
'LazyLogger', # legacy import
'warn_if_empty',
'stat', 'Stats',
'datetime_aware', 'datetime_naive',
'assert_never', # TODO maybe deprecate from use in my.core? will be in stdlib soon
'make_config',
'__NOT_HPI_MODULE__',
'Res', 'unwrap', 'notnone',
'dataclass', 'Path',
]
@ -55,7 +52,7 @@ __all__ = [
# you could put _init_hook.py next to your private my/config
# that way you can configure logging/warnings/env variables on every HPI import
try:
import my._init_hook # type: ignore[import-not-found] # noqa: F401
import my._init_hook # type: ignore[import-not-found]
except:
pass
##

View file

@ -1,5 +1,3 @@
from __future__ import annotations
import functools
import importlib
import inspect
@ -9,18 +7,17 @@ import shutil
import sys
import tempfile
import traceback
from collections.abc import Iterable, Sequence
from contextlib import ExitStack
from itertools import chain
from pathlib import Path
from subprocess import PIPE, CompletedProcess, Popen, check_call, run
from typing import Any, Callable
from typing import Any, Callable, Iterable, List, Optional, Sequence, Type
import click
@functools.lru_cache
def mypy_cmd() -> Sequence[str] | None:
def mypy_cmd() -> Optional[Sequence[str]]:
try:
# preferably, use mypy from current python env
import mypy # noqa: F401 fine not to use it
@ -35,7 +32,7 @@ def mypy_cmd() -> Sequence[str] | None:
return None
def run_mypy(cfg_path: Path) -> CompletedProcess | None:
def run_mypy(cfg_path: Path) -> Optional[CompletedProcess]:
# todo dunno maybe use the same mypy config in repository?
# I'd need to install mypy.ini then??
env = {**os.environ}
@ -66,28 +63,22 @@ def eprint(x: str) -> None:
# err=True prints to stderr
click.echo(x, err=True)
def indent(x: str) -> str:
# todo use textwrap.indent?
return ''.join(' ' + l for l in x.splitlines(keepends=True))
OK = '✅'
OFF = '🔲'
def info(x: str) -> None:
eprint(OK + ' ' + x)
def error(x: str) -> None:
eprint('❌ ' + x)
def warning(x: str) -> None:
eprint('❗ ' + x) # todo yellow?
def tb(e: Exception) -> None:
tb = ''.join(traceback.format_exception(Exception, e, e.__traceback__))
sys.stderr.write(indent(tb))
@ -95,7 +86,6 @@ def tb(e: Exception) -> None:
def config_create() -> None:
from .preinit import get_mycfg_dir
mycfg_dir = get_mycfg_dir()
created = False
@ -104,8 +94,7 @@ def config_create() -> None:
my_config = mycfg_dir / 'my' / 'config' / '__init__.py'
my_config.parent.mkdir(parents=True)
my_config.write_text(
'''
my_config.write_text('''
### HPI personal config
## see
# https://github.com/karlicoss/HPI/blob/master/doc/SETUP.org#setting-up-modules
@ -128,8 +117,7 @@ class example:
### you can insert your own configuration below
### but feel free to delete the stuff above if you don't need it
'''.lstrip()
)
'''.lstrip())
info(f'created empty config: {my_config}')
created = True
else:
@ -142,13 +130,12 @@ class example:
# todo return the config as a result?
def config_ok() -> bool:
errors: list[Exception] = []
errors: List[Exception] = []
# at this point 'my' should already be imported, so doesn't hurt to extract paths from it
import my
try:
paths: list[str] = list(my.__path__)
paths: List[str] = list(my.__path__)
except Exception as e:
errors.append(e)
error('failed to determine module import path')
@ -158,23 +145,19 @@ def config_ok() -> bool:
# first try doing as much as possible without actually importing my.config
from .preinit import get_mycfg_dir
cfg_path = get_mycfg_dir()
# alternative is importing my.config and then getting cfg_path from its __file__/__path__
# not sure which is better tbh
## check we're not using stub config
import my.core
try:
core_pkg_path = str(Path(my.core.__path__[0]).parent)
if str(cfg_path).startswith(core_pkg_path):
error(
f'''
error(f'''
Seems that the stub config is used ({cfg_path}). This is likely not going to work.
See https://github.com/karlicoss/HPI/blob/master/doc/SETUP.org#setting-up-modules for more information
'''.strip()
)
'''.strip())
errors.append(RuntimeError('bad config path'))
except Exception as e:
errors.append(e)
@ -238,7 +221,7 @@ See https://github.com/karlicoss/HPI/blob/master/doc/SETUP.org#setting-up-module
from .util import HPIModule, modules
def _modules(*, all: bool = False) -> Iterable[HPIModule]:
def _modules(*, all: bool=False) -> Iterable[HPIModule]:
skipped = []
for m in modules():
if not all and m.skip_reason is not None:
@ -249,7 +232,7 @@ def _modules(*, all: bool = False) -> Iterable[HPIModule]:
warning(f'Skipped {len(skipped)} modules: {skipped}. Pass --all if you want to see them.')
def modules_check(*, verbose: bool, list_all: bool, quick: bool, for_modules: list[str]) -> None:
def modules_check(*, verbose: bool, list_all: bool, quick: bool, for_modules: List[str]) -> None:
if len(for_modules) > 0:
# if you're checking specific modules, show errors
# hopefully makes sense?
@ -340,20 +323,17 @@ def tabulate_warnings() -> None:
Helper to avoid visual noise in hpi modules/doctor
'''
import warnings
orig = warnings.formatwarning
def override(*args, **kwargs) -> str:
res = orig(*args, **kwargs)
return ''.join(' ' + x for x in res.splitlines(keepends=True))
warnings.formatwarning = override
# TODO loggers as well?
def _requires(modules: Sequence[str]) -> Sequence[str]:
from .discovery_pure import module_by_name
mods = [module_by_name(module) for module in modules]
res = []
for mod in mods:
@ -380,7 +360,7 @@ def module_requires(*, module: Sequence[str]) -> None:
click.echo(x)
def module_install(*, user: bool, module: Sequence[str], parallel: bool = False, break_system_packages: bool = False) -> None:
def module_install(*, user: bool, module: Sequence[str], parallel: bool=False, break_system_packages: bool=False) -> None:
if isinstance(module, str):
# legacy behavior, used to take a since argument
module = [module]
@ -457,7 +437,7 @@ def _ui_getchar_pick(choices: Sequence[str], prompt: str = 'Select from: ') -> i
return result_map[ch]
def _locate_functions_or_prompt(qualified_names: list[str], *, prompt: bool = True) -> Iterable[Callable[..., Any]]:
def _locate_functions_or_prompt(qualified_names: List[str], *, prompt: bool = True) -> Iterable[Callable[..., Any]]:
from .query import QueryException, locate_qualified_function
from .stats import is_data_provider
@ -507,7 +487,6 @@ def _locate_functions_or_prompt(qualified_names: list[str], *, prompt: bool = Tr
def _warn_exceptions(exc: Exception) -> None:
from my.core import make_logger
logger = make_logger('CLI', level='warning')
logger.exception(f'hpi query: {exc}')
@ -519,14 +498,14 @@ def query_hpi_functions(
*,
output: str = 'json',
stream: bool = False,
qualified_names: list[str],
order_key: str | None,
order_by_value_type: type | None,
qualified_names: List[str],
order_key: Optional[str],
order_by_value_type: Optional[Type],
after: Any,
before: Any,
within: Any,
reverse: bool = False,
limit: int | None,
limit: Optional[int],
drop_unsorted: bool,
wrap_unsorted: bool,
warn_exceptions: bool,
@ -538,9 +517,6 @@ def query_hpi_functions(
# chain list of functions from user, in the order they wrote them on the CLI
input_src = chain(*(f() for f in _locate_functions_or_prompt(qualified_names)))
# NOTE: if passing just one function to this which returns a single namedtuple/dataclass,
# using both --order-key and --order-type will often be faster as it does not need to
# duplicate the iterator in memory, or try to find the --order-type type on each object before sorting
res = select_range(
input_src,
order_key=order_key,
@ -553,8 +529,7 @@ def query_hpi_functions(
warn_exceptions=warn_exceptions,
warn_func=_warn_exceptions,
raise_exceptions=raise_exceptions,
drop_exceptions=drop_exceptions,
)
drop_exceptions=drop_exceptions)
if output == 'json':
from .serialize import dumps
@ -605,7 +580,6 @@ def query_hpi_functions(
except ModuleNotFoundError:
eprint("'repl' typically uses ipython, install it with 'python3 -m pip install ipython'. falling back to stdlib...")
import code
code.interact(local=locals())
else:
IPython.embed()
@ -645,13 +619,13 @@ def main(*, debug: bool) -> None:
@functools.lru_cache(maxsize=1)
def _all_mod_names() -> list[str]:
def _all_mod_names() -> List[str]:
"""Should include all modules, in case user is trying to diagnose issues"""
# sort this, so that the order doesn't change while tabbing through
return sorted([m.name for m in modules()])
def _module_autocomplete(ctx: click.Context, args: Sequence[str], incomplete: str) -> list[str]:
def _module_autocomplete(ctx: click.Context, args: Sequence[str], incomplete: str) -> List[str]:
return [m for m in _all_mod_names() if m.startswith(incomplete)]
@ -810,14 +784,14 @@ def query_cmd(
function_name: Sequence[str],
output: str,
stream: bool,
order_key: str | None,
order_type: str | None,
after: str | None,
before: str | None,
within: str | None,
recent: str | None,
order_key: Optional[str],
order_type: Optional[str],
after: Optional[str],
before: Optional[str],
within: Optional[str],
recent: Optional[str],
reverse: bool,
limit: int | None,
limit: Optional[int],
drop_unsorted: bool,
wrap_unsorted: bool,
warn_exceptions: bool,
@ -853,7 +827,7 @@ def query_cmd(
from datetime import date, datetime
chosen_order_type: type | None
chosen_order_type: Optional[Type]
if order_type == "datetime":
chosen_order_type = datetime
elif order_type == "date":
@ -889,8 +863,7 @@ def query_cmd(
wrap_unsorted=wrap_unsorted,
warn_exceptions=warn_exceptions,
raise_exceptions=raise_exceptions,
drop_exceptions=drop_exceptions,
)
drop_exceptions=drop_exceptions)
except QueryException as qe:
eprint(str(qe))
sys.exit(1)
@ -905,7 +878,6 @@ def query_cmd(
def test_requires() -> None:
from click.testing import CliRunner
result = CliRunner().invoke(main, ['module', 'requires', 'my.github.ghexport', 'my.browser.export'])
assert result.exit_code == 0
assert "github.com/karlicoss/ghexport" in result.output

View file

@ -10,18 +10,15 @@ how many cores we want to dedicate to the DAL.
Enabled by the env variable, specifying how many cores to dedicate
e.g. "HPI_CPU_POOL=4 hpi query ..."
"""
from __future__ import annotations
import os
from concurrent.futures import ProcessPoolExecutor
from typing import cast
from typing import Optional, cast
_NOT_SET = cast(ProcessPoolExecutor, object())
_INSTANCE: ProcessPoolExecutor | None = _NOT_SET
_INSTANCE: Optional[ProcessPoolExecutor] = _NOT_SET
def get_cpu_pool() -> ProcessPoolExecutor | None:
def get_cpu_pool() -> Optional[ProcessPoolExecutor]:
global _INSTANCE
if _INSTANCE is _NOT_SET:
use_cpu_pool = os.environ.get('HPI_CPU_POOL')
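# (sketch) downstream code would use the pool along these lines, given that
# get_cpu_pool() returns None when HPI_CPU_POOL is unset:
#
#     pool = get_cpu_pool()
#     if pool is not None:
#         results = list(pool.map(process_one, inputs))  # process_one/inputs are illustrative
#     else:
#         results = list(map(process_one, inputs))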

View file

@ -1,17 +1,16 @@
"""
Various helpers for compression
"""
# fmt: off
from __future__ import annotations
import io
import pathlib
from collections.abc import Iterator, Sequence
import sys
from datetime import datetime
from functools import total_ordering
from pathlib import Path
from typing import IO, Union
from typing import IO, Any, Iterator, Sequence, Union
PathIsh = Union[Path, str]

View file

@ -1,18 +1,16 @@
from __future__ import annotations
from .internal import assert_subpackage
assert_subpackage(__name__)
from .internal import assert_subpackage; assert_subpackage(__name__)
import logging
import sys
from collections.abc import Iterator
from contextlib import contextmanager
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
Callable,
Iterator,
Optional,
Type,
TypeVar,
Union,
cast,
@ -23,6 +21,7 @@ import appdirs # type: ignore[import-untyped]
from . import warnings
PathIsh = Union[str, Path] # avoid circular import from .common
@ -61,12 +60,12 @@ def _appdirs_cache_dir() -> Path:
_CACHE_DIR_NONE_HACK = Path('/tmp/hpi/cachew_none_hack')
def cache_dir(suffix: PathIsh | None = None) -> Path:
def cache_dir(suffix: Optional[PathIsh] = None) -> Path:
from . import core_config as CC
cdir_ = CC.config.get_cache_dir()
sp: Path | None = None
sp: Optional[Path] = None
if suffix is not None:
sp = Path(suffix)
# guess if you do need absolute, better path it directly instead of as suffix?
@ -136,7 +135,7 @@ if TYPE_CHECKING:
CC = Callable[P, R] # need to give it a name, if inlined into bound=, mypy runs in a bug
PathProvider = Union[PathIsh, Callable[P, PathIsh]]
# NOTE: in cachew, HashFunction type returns str
# however in practice, cachew always calls str for its result
# however in practice, cachew alwasy calls str for its result
# so perhaps better to switch it to Any in cachew as well
HashFunction = Callable[P, Any]
@ -145,19 +144,21 @@ if TYPE_CHECKING:
# we need two versions due to @doublewrap
# this is when we just annotate as @cachew without any args
@overload # type: ignore[no-overload-impl]
def mcachew(fun: F) -> F: ...
def mcachew(fun: F) -> F:
...
@overload
def mcachew(
cache_path: PathProvider | None = ...,
cache_path: Optional[PathProvider] = ...,
*,
force_file: bool = ...,
cls: type | None = ...,
cls: Optional[Type] = ...,
depends_on: HashFunction = ...,
logger: logging.Logger | None = ...,
logger: Optional[logging.Logger] = ...,
chunk_by: int = ...,
synthetic_key: str | None = ...,
) -> Callable[[F], F]: ...
synthetic_key: Optional[str] = ...,
) -> Callable[[F], F]:
...
else:
mcachew = _mcachew_impl

View file

@ -3,28 +3,24 @@ from __future__ import annotations
import importlib
import re
import sys
from collections.abc import Iterator
from contextlib import ExitStack, contextmanager
from typing import Any, Callable, TypeVar
from typing import Any, Callable, Dict, Iterator, Optional, Type, TypeVar
Attrs = dict[str, Any]
Attrs = Dict[str, Any]
C = TypeVar('C')
# todo not sure about it, could be overthinking...
# but short enough to change later
# TODO document why it's necessary?
def make_config(cls: type[C], migration: Callable[[Attrs], Attrs] = lambda x: x) -> C:
def make_config(cls: Type[C], migration: Callable[[Attrs], Attrs]=lambda x: x) -> C:
user_config = cls.__base__
old_props = {
# NOTE: deliberately use gettatr to 'force' class properties here
k: getattr(user_config, k)
for k in vars(user_config)
k: getattr(user_config, k) for k in vars(user_config)
}
new_props = migration(old_props)
from dataclasses import fields
params = {
k: v
for k, v in new_props.items()
@ -55,8 +51,6 @@ def _override_config(config: F) -> Iterator[F]:
ModuleRegex = str
@contextmanager
def _reload_modules(modules: ModuleRegex) -> Iterator[None]:
# need to use list here, otherwise reordering with set might mess things up
@ -87,14 +81,13 @@ def _reload_modules(modules: ModuleRegex) -> Iterator[None]:
@contextmanager
def tmp_config(*, modules: ModuleRegex | None = None, config=None):
def tmp_config(*, modules: Optional[ModuleRegex]=None, config=None):
if modules is None:
assert config is None
if modules is not None:
assert config is not None
import my.config
with ExitStack() as module_reload_stack, _override_config(my.config) as new_config:
if config is not None:
overrides = {k: v for k, v in vars(config).items() if not k.startswith('__')}
@ -109,7 +102,6 @@ def tmp_config(*, modules: ModuleRegex | None = None, config=None):
def test_tmp_config() -> None:
class extra:
data_path = '/path/to/data'
with tmp_config() as c:
assert c.google != 'whatever'
assert not hasattr(c, 'extra')

View file

@ -1,18 +1,20 @@
from __future__ import annotations
import os
from collections.abc import Iterable, Sequence
from glob import glob as do_glob
from pathlib import Path
from typing import (
TYPE_CHECKING,
Callable,
Generic,
Iterable,
List,
Sequence,
Tuple,
TypeVar,
Union,
)
from . import compat, warnings
from . import compat
from . import warnings
# some helper functions
# TODO start deprecating this? soon we'd be able to use Path | str syntax which is shorter and more explicit
@ -22,22 +24,20 @@ Paths = Union[Sequence[PathIsh], PathIsh]
DEFAULT_GLOB = '*'
def get_files(
pp: Paths,
glob: str = DEFAULT_GLOB,
glob: str=DEFAULT_GLOB,
*,
sort: bool = True,
guess_compression: bool = True,
) -> tuple[Path, ...]:
sort: bool=True,
guess_compression: bool=True,
) -> Tuple[Path, ...]:
"""
Helper function to avoid boilerplate.
Tuple as return type is a bit friendlier for hashing/caching, so hopefully makes sense
"""
# TODO FIXME mm, some wrapper to assert iterator isn't empty?
sources: list[Path]
sources: List[Path]
if isinstance(pp, Path):
sources = [pp]
elif isinstance(pp, str):
@ -54,7 +54,7 @@ def get_files(
# TODO ugh. very flaky... -3 because [<this function>, get_files(), <actual caller>]
return traceback.extract_stack()[-3].filename
paths: list[Path] = []
paths: List[Path] = []
for src in sources:
if src.parts[0] == '~':
src = src.expanduser()
@ -63,8 +63,8 @@ def get_files(
if '*' in gs:
if glob != DEFAULT_GLOB:
warnings.medium(f"{caller()}: treating {gs} as glob path. Explicit glob={glob} argument is ignored!")
paths.extend(map(Path, do_glob(gs))) # noqa: PTH207
elif os.path.isdir(str(src)): # noqa: PTH112
paths.extend(map(Path, do_glob(gs)))
elif os.path.isdir(str(src)):
# NOTE: we're using os.path here on purpose instead of src.is_dir
# the reason is that is_dir for archives might return True and then
# this clause would try globbing inside the archives
@ -157,7 +157,7 @@ def get_valid_filename(s: str) -> str:
# TODO deprecate and suggest to use one from my.core directly? not sure
from .utils.itertools import unique_everseen # noqa: F401
from .utils.itertools import unique_everseen
### legacy imports, keeping them here for backwards compatibility
## hiding behind TYPE_CHECKING so it works in runtime
@ -234,14 +234,16 @@ if not TYPE_CHECKING:
return types.asdict(*args, **kwargs)
# todo wrap these in deprecated decorator as well?
# TODO hmm how to deprecate these in runtime?
# tricky cause they are actually classes/types
from typing import Literal # noqa: F401
from .cachew import mcachew # noqa: F401
# this is kinda internal, should just use my.core.logging.setup_logger if necessary
from .logging import setup_logger
# TODO hmm how to deprecate these in runtime?
# tricky cause they are actually classes/types
from typing import Literal # noqa: F401
from .stats import Stats
from .types import (
Json,

View file

@ -3,8 +3,6 @@ Contains backwards compatibility helpers for different python versions.
If something is relevant to HPI itself, please put it in .hpi_compat instead
'''
from __future__ import annotations
import sys
from typing import TYPE_CHECKING
@ -31,7 +29,6 @@ if not TYPE_CHECKING:
@deprecated('use .removesuffix method on string directly instead')
def removesuffix(text: str, suffix: str) -> str:
return text.removesuffix(suffix)
##
## used to have compat function before 3.8 for these, keeping for runtime back compatibility
@ -49,13 +46,13 @@ else:
# bisect_left doesn't have a 'key' parameter (which we use)
# till python3.10
if sys.version_info[:2] <= (3, 9):
from typing import Any, Callable, List, Optional, TypeVar # noqa: UP035
from typing import Any, Callable, List, Optional, TypeVar
X = TypeVar('X')
# copied from python src
# fmt: off
def bisect_left(a: list[Any], x: Any, lo: int=0, hi: int | None=None, *, key: Callable[..., Any] | None=None) -> int:
def bisect_left(a: List[Any], x: Any, lo: int=0, hi: Optional[int]=None, *, key: Optional[Callable[..., Any]]=None) -> int:
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:

View file

@ -2,21 +2,18 @@
Bindings for the 'core' HPI configuration
'''
from __future__ import annotations
import re
from collections.abc import Sequence
from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Sequence
from . import warnings
from . import PathIsh, warnings
try:
from my.config import core as user_config # type: ignore[attr-defined]
except Exception as e:
try:
from my.config import common as user_config # type: ignore[attr-defined]
warnings.high("'common' config section is deprecated. Please rename it to 'core'.")
except Exception as e2:
# make it defensive, because it's pretty commonly used and would be annoying if it breaks hpi doctor etc.
@ -27,7 +24,6 @@ except Exception as e:
_HPI_CACHE_DIR_DEFAULT = ''
@dataclass
class Config(user_config):
'''
@ -38,7 +34,7 @@ class Config(user_config):
cache_dir = '/your/custom/cache/path'
'''
cache_dir: Path | str | None = _HPI_CACHE_DIR_DEFAULT
cache_dir: Optional[PathIsh] = _HPI_CACHE_DIR_DEFAULT
'''
Base directory for cachew.
- if None , means cache is disabled
@ -48,7 +44,7 @@ class Config(user_config):
NOTE: you shouldn't use this attribute in HPI modules directly, use Config.get_cache_dir()/cachew.cache_dir() instead
'''
tmp_dir: Path | str | None = None
tmp_dir: Optional[PathIsh] = None
'''
Path to a temporary directory.
This can be used temporarily while extracting zipfiles etc...
@ -56,36 +52,34 @@ class Config(user_config):
- otherwise , use the specified directory as the base temporary directory
'''
enabled_modules: Sequence[str] | None = None
enabled_modules : Optional[Sequence[str]] = None
'''
list of regexes/globs
- None means 'rely on disabled_modules'
'''
disabled_modules: Sequence[str] | None = None
disabled_modules: Optional[Sequence[str]] = None
'''
list of regexes/globs
- None means 'rely on enabled_modules'
'''
def get_cache_dir(self) -> Path | None:
def get_cache_dir(self) -> Optional[Path]:
cdir = self.cache_dir
if cdir is None:
return None
if cdir == _HPI_CACHE_DIR_DEFAULT:
from .cachew import _appdirs_cache_dir
return _appdirs_cache_dir()
else:
return Path(cdir).expanduser()
def get_tmp_dir(self) -> Path:
tdir: Path | str | None = self.tmp_dir
tdir: Optional[PathIsh] = self.tmp_dir
tpath: Path
# use tempfile if unset
if tdir is None:
import tempfile
tpath = Path(tempfile.gettempdir()) / 'HPI'
else:
tpath = Path(tdir)
@ -93,10 +87,10 @@ class Config(user_config):
tpath.mkdir(parents=True, exist_ok=True)
return tpath
def _is_module_active(self, module: str) -> bool | None:
def _is_module_active(self, module: str) -> Optional[bool]:
# None means the config doesn't specify anything
# todo might be nice to return the 'reason' too? e.g. which option has matched
def matches(specs: Sequence[str]) -> str | None:
def matches(specs: Sequence[str]) -> Optional[str]:
for spec in specs:
# not sure because . (package separator) matches anything, but I guess unlikely to clash
if re.match(spec, module):
@ -127,8 +121,8 @@ config = make_config(Config)
### tests start
from collections.abc import Iterator
from contextlib import contextmanager as ctx
from typing import Iterator
@ctx
@ -169,5 +163,4 @@ def test_active_modules() -> None:
assert cc._is_module_active("my.body.exercise") is True
assert len(record_warnings) == 1
### tests end
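
For reference, a minimal sketch of how these options look in a user's config (paths and module names here are placeholders, not defaults from the source):

```python
# hypothetical 'core' section of my.config
class core:
    cache_dir = '~/.cache/my'         # None would disable cachew entirely
    tmp_dir = None                    # None falls back to the system tempdir
    enabled_modules = ['my.demo']     # regexes/globs matching module names
    disabled_modules = ['my.lastfm']  # None means 'rely on enabled_modules'
```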

View file

@ -5,25 +5,23 @@ A helper module for defining denylists for sources programmatically
For docs, see doc/DENYLIST.md
"""
from __future__ import annotations
import functools
import json
import sys
from collections import defaultdict
from collections.abc import Iterator, Mapping
from pathlib import Path
from typing import Any, TypeVar
from typing import Any, Dict, Iterator, List, Mapping, Set, TypeVar
import click
from more_itertools import seekable
from .serialize import dumps
from .warnings import medium
from my.core.common import PathIsh
from my.core.serialize import dumps
from my.core.warnings import medium
T = TypeVar("T")
DenyMap = Mapping[str, set[Any]]
DenyMap = Mapping[str, Set[Any]]
def _default_key_func(obj: T) -> str:
@ -31,9 +29,9 @@ def _default_key_func(obj: T) -> str:
class DenyList:
def __init__(self, denylist_file: Path | str) -> None:
def __init__(self, denylist_file: PathIsh):
self.file = Path(denylist_file).expanduser().absolute()
self._deny_raw_list: list[dict[str, Any]] = []
self._deny_raw_list: List[Dict[str, Any]] = []
self._deny_map: DenyMap = defaultdict(set)
# deny cli, user can override these
@ -47,7 +45,7 @@ class DenyList:
return
deny_map: DenyMap = defaultdict(set)
data: list[dict[str, Any]] = json.loads(self.file.read_text())
data: List[Dict[str, Any]]= json.loads(self.file.read_text())
self._deny_raw_list = data
for ignore in data:
@ -114,7 +112,7 @@ class DenyList:
self._load()
self._deny_raw({key: self._stringify_value(value)}, write=write)
def _deny_raw(self, data: dict[str, Any], *, write: bool = False) -> None:
def _deny_raw(self, data: Dict[str, Any], *, write: bool = False) -> None:
self._deny_raw_list.append(data)
if write:
self.write()
@ -133,7 +131,7 @@ class DenyList:
def _deny_cli_remember(
self,
items: Iterator[T],
mem: dict[str, T],
mem: Dict[str, T],
) -> Iterator[str]:
keyf = self._deny_cli_key_func or _default_key_func
# i.e., convert each item to a string, and map str -> item
@ -159,8 +157,10 @@ class DenyList:
# reset the iterator
sit.seek(0)
# so we can map the selected string from fzf back to the original objects
memory_map: dict[str, T] = {}
picker = FzfPrompt(executable_path=self.fzf_path, default_options="--no-multi")
memory_map: Dict[str, T] = {}
picker = FzfPrompt(
executable_path=self.fzf_path, default_options="--no-multi"
)
picked_l = picker.prompt(
self._deny_cli_remember(itr, memory_map),
"--read0",

View file

@ -10,8 +10,6 @@ This potentially allows it to be:
It should be free of external modules, importlib, exec, etc. etc.
'''
from __future__ import annotations
REQUIRES = 'REQUIRES'
NOT_HPI_MODULE_VAR = '__NOT_HPI_MODULE__'
@ -21,9 +19,8 @@ import ast
import logging
import os
import re
from collections.abc import Iterable, Sequence
from pathlib import Path
from typing import Any, NamedTuple, Optional, cast
from typing import Any, Iterable, List, NamedTuple, Optional, Sequence, cast
'''
None means that requirements weren't defined (different from empty requirements)
@ -33,11 +30,11 @@ Requires = Optional[Sequence[str]]
class HPIModule(NamedTuple):
name: str
skip_reason: str | None
doc: str | None = None
file: Path | None = None
skip_reason: Optional[str]
doc: Optional[str] = None
file: Optional[Path] = None
requires: Requires = None
legacy: str | None = None # contains reason/deprecation warning
legacy: Optional[str] = None # contains reason/deprecation warning
def ignored(m: str) -> bool:
@ -147,7 +144,7 @@ def all_modules() -> Iterable[HPIModule]:
def _iter_my_roots() -> Iterable[Path]:
import my # doesn't import any code, because of namespace package
paths: list[str] = list(my.__path__)
paths: List[str] = list(my.__path__)
if len(paths) == 0:
# should probably never happen? if this code is running, it was imported
# because something was added to __path__ to match this name
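
A small sketch of how this pure discovery layer is typically consumed (assuming `all_modules` is importable as in the hunk above):

```python
from my.core.discovery_pure import all_modules

for m in all_modules():
    # HPIModule is the NamedTuple above; skip_reason is None for active modules
    print(f'{m.name}: {m.skip_reason or "active"}')
```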

View file

@ -3,16 +3,19 @@ Various error handling helpers
See https://beepb00p.xyz/mypy-error-handling.html#kiss for more detail
"""
from __future__ import annotations
import traceback
from collections.abc import Iterable, Iterator
from datetime import datetime
from itertools import tee
from typing import (
Any,
Callable,
Iterable,
Iterator,
List,
Literal,
Optional,
Tuple,
Type,
TypeVar,
Union,
cast,
@ -30,7 +33,7 @@ Res = ResT[T, Exception]
ErrorPolicy = Literal["yield", "raise", "drop"]
def notnone(x: T | None) -> T:
def notnone(x: Optional[T]) -> T:
assert x is not None
return x
@ -57,15 +60,13 @@ def raise_exceptions(itr: Iterable[Res[T]]) -> Iterator[T]:
yield o
def warn_exceptions(itr: Iterable[Res[T]], warn_func: Callable[[Exception], None] | None = None) -> Iterator[T]:
def warn_exceptions(itr: Iterable[Res[T]], warn_func: Optional[Callable[[Exception], None]] = None) -> Iterator[T]:
# if not provided, use the 'warnings' module
if warn_func is None:
from my.core.warnings import medium
def _warn_func(e: Exception) -> None:
# TODO: print traceback? but user could always --raise-exceptions as well
medium(str(e))
warn_func = _warn_func
for o in itr:
@ -80,7 +81,7 @@ def echain(ex: E, cause: Exception) -> E:
return ex
def split_errors(l: Iterable[ResT[T, E]], ET: type[E]) -> tuple[Iterable[T], Iterable[E]]:
def split_errors(l: Iterable[ResT[T, E]], ET: Type[E]) -> Tuple[Iterable[T], Iterable[E]]:
# TODO would be nice to have ET=Exception default? but it causes some mypy complaints?
vit, eit = tee(l)
# TODO ugh, not sure if I can reconcile type checking and runtime and convince mypy that ET and E are the same type?
@ -98,9 +99,7 @@ def split_errors(l: Iterable[ResT[T, E]], ET: type[E]) -> tuple[Iterable[T], Ite
K = TypeVar('K')
def sort_res_by(items: Iterable[Res[T]], key: Callable[[Any], K]) -> list[Res[T]]:
def sort_res_by(items: Iterable[Res[T]], key: Callable[[Any], K]) -> List[Res[T]]:
"""
Sort a sequence potentially interleaved with errors/entries on which the key can't be computed.
The general idea is: the error sticks to the non-error entry that follows it
@ -108,7 +107,7 @@ def sort_res_by(items: Iterable[Res[T]], key: Callable[[Any], K]) -> list[Res[T]
group = []
groups = []
for i in items:
k: K | None
k: Optional[K]
try:
k = key(i)
except Exception: # error while computing key? dunno, might be nice to handle...
@ -118,7 +117,7 @@ def sort_res_by(items: Iterable[Res[T]], key: Callable[[Any], K]) -> list[Res[T]
groups.append((k, group))
group = []
results: list[Res[T]] = []
results: List[Res[T]] = []
for _v, grp in sorted(groups, key=lambda p: p[0]): # type: ignore[return-value, arg-type] # TODO SupportsLessThan??
results.extend(grp)
results.extend(group) # handle last group (it will always be errors only)
@ -163,20 +162,20 @@ def test_sort_res_by() -> None:
# helpers to associate timestamps with the errors (so something meaningful could be displayed on the plots, for example)
# todo document it under 'patterns' somewhere...
# todo proper typevar?
def set_error_datetime(e: Exception, dt: datetime | None) -> None:
def set_error_datetime(e: Exception, dt: Optional[datetime]) -> None:
if dt is None:
return
e.args = (*e.args, dt)
# todo not sure if should return new exception?
def attach_dt(e: Exception, *, dt: datetime | None) -> Exception:
def attach_dt(e: Exception, *, dt: Optional[datetime]) -> Exception:
set_error_datetime(e, dt)
return e
# todo it might be problematic because might mess with timezones (when it's converted to string, it's converted to a shift)
def extract_error_datetime(e: Exception) -> datetime | None:
def extract_error_datetime(e: Exception) -> Optional[datetime]:
import re
for x in reversed(e.args):
@ -202,10 +201,10 @@ MODULE_SETUP_URL = 'https://github.com/karlicoss/HPI/blob/master/doc/SETUP.org#p
def warn_my_config_import_error(
err: ImportError | AttributeError,
err: Union[ImportError, AttributeError],
*,
help_url: str | None = None,
module_name: str | None = None,
help_url: Optional[str] = None,
module_name: Optional[str] = None,
) -> bool:
"""
If the user tried to import something from my.config but it failed,
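
To illustrate the datetime helpers above: they simply stash the timestamp in `Exception.args`, so the round trip looks roughly like this sketch:

```python
from datetime import datetime

from my.core.error import attach_dt, extract_error_datetime

e = attach_dt(RuntimeError('failed to parse row'), dt=datetime(2024, 1, 1))
assert extract_error_datetime(e) == datetime(2024, 1, 1)
```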

View file

@ -1,8 +1,6 @@
from __future__ import annotations
import sys
import types
from typing import Any
from typing import Any, Dict, Optional
# The idea behind this one is to support accessing "overlaid/shadowed" modules from namespace packages
@ -22,7 +20,7 @@ def import_original_module(
file: str,
*,
star: bool = False,
globals: dict[str, Any] | None = None,
globals: Optional[Dict[str, Any]] = None,
) -> types.ModuleType:
module_to_restore = sys.modules[module_name]

View file

@ -1,29 +1,29 @@
from __future__ import annotations
from .internal import assert_subpackage; assert_subpackage(__name__)
from .internal import assert_subpackage
assert_subpackage(__name__)
import dataclasses
import dataclasses as dcl
import inspect
from typing import Any, Generic, TypeVar
from typing import Any, Type, TypeVar
D = TypeVar('D')
def _freeze_dataclass(Orig: type[D]):
ofields = [(f.name, f.type, f) for f in dataclasses.fields(Orig)] # type: ignore[arg-type] # see https://github.com/python/typing_extensions/issues/115
def _freeze_dataclass(Orig: Type[D]):
ofields = [(f.name, f.type, f) for f in dcl.fields(Orig)] # type: ignore[arg-type] # see https://github.com/python/typing_extensions/issues/115
# extract properties along with their types
props = list(inspect.getmembers(Orig, lambda o: isinstance(o, property)))
pfields = [(name, inspect.signature(getattr(prop, 'fget')).return_annotation) for name, prop in props]
# FIXME not sure about name?
# NOTE: sadly passing bases=[Orig] won't work, python won't let us override properties with fields
RRR = dataclasses.make_dataclass('RRR', fields=[*ofields, *pfields])
RRR = dcl.make_dataclass('RRR', fields=[*ofields, *pfields])
# todo maybe even declare as slots?
return props, RRR
# todo need some decorator thingie?
from typing import Generic
class Freezer(Generic[D]):
'''
Some magic which converts dataclass properties into fields.
@ -31,13 +31,13 @@ class Freezer(Generic[D]):
For now only supports dataclasses.
'''
def __init__(self, Orig: type[D]) -> None:
def __init__(self, Orig: Type[D]) -> None:
self.Orig = Orig
self.props, self.Frozen = _freeze_dataclass(Orig)
def freeze(self, value: D) -> D:
pvalues = {name: getattr(value, name) for name, _ in self.props}
return self.Frozen(**dataclasses.asdict(value), **pvalues) # type: ignore[call-overload] # see https://github.com/python/typing_extensions/issues/115
return self.Frozen(**dcl.asdict(value), **pvalues) # type: ignore[call-overload] # see https://github.com/python/typing_extensions/issues/115
### tests
@ -45,7 +45,7 @@ class Freezer(Generic[D]):
# this needs to be defined here to prevent a mypy bug
# see https://github.com/python/mypy/issues/7281
@dataclasses.dataclass
@dcl.dataclass
class _A:
x: Any
@ -71,7 +71,6 @@ def test_freezer() -> None:
assert fd['typed'] == 123
assert fd['untyped'] == [1, 2, 3]
###
# TODO shit. what to do with exceptions?
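
A short sketch of what the Freezer buys you: a `@property` on the original dataclass becomes a plain field on the frozen copy (the `Point` class here is illustrative):

```python
import dataclasses

from my.core.freeze import Freezer

@dataclasses.dataclass
class Point:
    x: int

    @property
    def double(self) -> int:
        return self.x * 2

frozen = Freezer(Point).freeze(Point(x=3))
assert dataclasses.asdict(frozen) == {'x': 3, 'double': 6}
```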

View file

@ -3,14 +3,11 @@ Contains various backwards compatibility/deprecation helpers relevant to HPI its
(as opposed to .compat module which implements compatibility between python versions)
"""
from __future__ import annotations
import inspect
import os
import re
from collections.abc import Iterator, Sequence
from types import ModuleType
from typing import TypeVar
from typing import Iterator, List, Optional, Sequence, TypeVar
from . import warnings
@ -18,7 +15,7 @@ from . import warnings
def handle_legacy_import(
parent_module_name: str,
legacy_submodule_name: str,
parent_module_path: list[str],
parent_module_path: List[str],
) -> bool:
###
# this is to trick mypy into treating this as a proper namespace package
@ -125,8 +122,8 @@ class always_supports_sequence(Iterator[V]):
def __init__(self, it: Iterator[V]) -> None:
self._it = it
self._list: list[V] | None = None
self._lit: Iterator[V] | None = None
self._list: Optional[List[V]] = None
self._lit: Optional[Iterator[V]] = None
def __iter__(self) -> Iterator[V]: # noqa: PYI034
if self._list is not None:
@ -145,7 +142,7 @@ class always_supports_sequence(Iterator[V]):
return getattr(self._it, name)
@property
def _aslist(self) -> list[V]:
def _aslist(self) -> List[V]:
if self._list is None:
qualname = getattr(self._it, '__qualname__', '<no qualname>') # defensive just in case
warnings.medium(f'Using {qualname} as list is deprecated. Migrate to iterative processing or call list() explicitly.')

View file

@ -2,14 +2,9 @@
TODO doesn't really belong to 'core' morally, but can think of moving out later
'''
from __future__ import annotations
from .internal import assert_subpackage; assert_subpackage(__name__)
from .internal import assert_subpackage
assert_subpackage(__name__)
from collections.abc import Iterable
from typing import Any
from typing import Any, Dict, Iterable, Optional
import click
@ -26,7 +21,7 @@ class config:
RESET_DEFAULT = False
def fill(it: Iterable[Any], *, measurement: str, reset: bool = RESET_DEFAULT, dt_col: str = 'dt') -> None:
def fill(it: Iterable[Any], *, measurement: str, reset: bool=RESET_DEFAULT, dt_col: str='dt') -> None:
# todo infer dt column automatically, reuse in stat?
# it doesn't like dots, ends up some syntax error?
measurement = measurement.replace('.', '_')
@ -35,7 +30,6 @@ def fill(it: Iterable[Any], *, measurement: str, reset: bool = RESET_DEFAULT, dt
db = config.db
from influxdb import InfluxDBClient # type: ignore
client = InfluxDBClient()
# todo maybe create if not exists?
# client.create_database(db)
@ -46,7 +40,7 @@ def fill(it: Iterable[Any], *, measurement: str, reset: bool = RESET_DEFAULT, dt
client.delete_series(database=db, measurement=measurement)
# TODO need to take schema here...
cache: dict[str, bool] = {}
cache: Dict[str, bool] = {}
def good(f, v) -> bool:
c = cache.get(f)
@ -65,7 +59,7 @@ def fill(it: Iterable[Any], *, measurement: str, reset: bool = RESET_DEFAULT, dt
def dit() -> Iterable[Json]:
for i in it:
d = asdict(i)
tags: Json | None = None
tags: Optional[Json] = None
tags_ = d.get('tags') # meh... handle in a more robust manner
if tags_ is not None and isinstance(tags_, dict): # FIXME meh.
del d['tags']
@ -90,7 +84,6 @@ def fill(it: Iterable[Any], *, measurement: str, reset: bool = RESET_DEFAULT, dt
}
from more_itertools import chunked
# "The optimal batch size is 5000 lines of line protocol."
# some chunking is def necessary, otherwise it fails
inserted = 0
@ -104,7 +97,7 @@ def fill(it: Iterable[Any], *, measurement: str, reset: bool = RESET_DEFAULT, dt
# todo "Specify timestamp precision when writing to InfluxDB."?
def magic_fill(it, *, name: str | None = None, reset: bool = RESET_DEFAULT) -> None:
def magic_fill(it, *, name: Optional[str]=None, reset: bool=RESET_DEFAULT) -> None:
if name is None:
assert callable(it) # generators have no name/module
name = f'{it.__module__}:{it.__name__}'
@ -116,7 +109,6 @@ def magic_fill(it, *, name: str | None = None, reset: bool = RESET_DEFAULT) -> N
from itertools import tee
from more_itertools import first, one
it, x = tee(it)
f = first(x, default=None)
if f is None:
@ -126,11 +118,9 @@ def magic_fill(it, *, name: str | None = None, reset: bool = RESET_DEFAULT) -> N
# TODO can we reuse pandas code or something?
#
from .pandas import _as_columns
schema = _as_columns(type(f))
from datetime import datetime
dtex = RuntimeError(f'expected single datetime field. schema: {schema}')
dtf = one((f for f, t in schema.items() if t == datetime), too_short=dtex, too_long=dtex)
@ -147,7 +137,6 @@ def main() -> None:
@click.argument('FUNCTION_NAME', type=str, required=True)
def populate(*, function_name: str, reset: bool) -> None:
from .__main__ import _locate_functions_or_prompt
[provider] = list(_locate_functions_or_prompt([function_name]))
# todo could have a non-interactive version which populates from all data sources for the provider?
magic_fill(provider, reset=reset)
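
For orientation, a sketch of feeding a toy data source into the `fill` helper above (assumes a running local InfluxDB; the dataclass and measurement name are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

from my.core import influxdb

@dataclass
class Point:
    dt: datetime
    value: float

def points():
    yield Point(dt=datetime(2024, 1, 1, tzinfo=timezone.utc), value=1.0)

influxdb.fill(points(), measurement='demo_points')
```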

View file

@ -19,7 +19,6 @@ def setup_config() -> None:
from pathlib import Path
from .preinit import get_mycfg_dir
mycfg_dir = get_mycfg_dir()
if not mycfg_dir.exists():
@ -44,7 +43,6 @@ See https://github.com/karlicoss/HPI/blob/master/doc/SETUP.org#setting-up-the-mo
except ImportError as ex:
# just in case... who knows what crazy setup users have
import logging
logging.exception(ex)
warnings.warn(f"""
Importing 'my.config' failed! (error: {ex}). This is likely to result in issues.

View file

@ -1,6 +1,4 @@
from .internal import assert_subpackage
assert_subpackage(__name__)
from .internal import assert_subpackage; assert_subpackage(__name__)
from . import warnings

View file

@ -5,21 +5,17 @@ This can potentially allow both for safer defensive parsing, and let you know if
TODO perhaps need to get some inspiration from linear logic to decide on a nice API...
'''
from __future__ import annotations
from collections import OrderedDict
from typing import Any
from typing import Any, List
def ignore(w, *keys):
for k in keys:
w[k].ignore()
def zoom(w, *keys):
return [w[k].zoom() for k in keys]
# TODO need to support lists
class Zoomable:
def __init__(self, parent, *args, **kwargs) -> None:
@ -44,7 +40,7 @@ class Zoomable:
assert self.parent is not None
self.parent._remove(self)
def zoom(self) -> Zoomable:
def zoom(self) -> 'Zoomable':
self.consume()
return self
@ -67,7 +63,6 @@ class Wdict(Zoomable, OrderedDict):
def this_consumed(self):
return len(self) == 0
# TODO specify mypy type for the index special method?
@ -82,7 +77,6 @@ class Wlist(Zoomable, list):
def this_consumed(self):
return len(self) == 0
class Wvalue(Zoomable):
def __init__(self, parent, value: Any) -> None:
super().__init__(parent)
@ -99,9 +93,12 @@ class Wvalue(Zoomable):
return 'WValue{' + repr(self.value) + '}'
def _wrap(j, parent=None) -> tuple[Zoomable, list[Zoomable]]:
from typing import Tuple
def _wrap(j, parent=None) -> Tuple[Zoomable, List[Zoomable]]:
res: Zoomable
cc: list[Zoomable]
cc: List[Zoomable]
if isinstance(j, dict):
res = Wdict(parent)
cc = [res]
@ -125,14 +122,13 @@ def _wrap(j, parent=None) -> tuple[Zoomable, list[Zoomable]]:
raise RuntimeError(f'Unexpected type: {type(j)} {j}')
from collections.abc import Iterator
from contextlib import contextmanager
from typing import Iterator
class UnconsumedError(Exception):
pass
# TODO think about error policy later...
@contextmanager
def wrap(j, *, throw=True) -> Iterator[Zoomable]:
@ -157,7 +153,6 @@ from typing import cast
def test_unconsumed() -> None:
import pytest
with pytest.raises(UnconsumedError):
with wrap({'a': 1234}) as w:
w = cast(Wdict, w)
@ -168,7 +163,6 @@ def test_unconsumed() -> None:
w = cast(Wdict, w)
d = w['c']['d'].zoom()
def test_consumed() -> None:
with wrap({'a': 1234}) as w:
w = cast(Wdict, w)
@ -179,7 +173,6 @@ def test_consumed() -> None:
c = w['c'].zoom()
d = c['d'].zoom()
def test_types() -> None:
# (string, number, object, array, boolean or null)
with wrap({'string': 'string', 'number': 3.14, 'boolean': True, 'null': None, 'list': [1, 2, 3]}) as w:
@ -191,7 +184,6 @@ def test_types() -> None:
for x in list(w['list'].zoom()): # TODO eh. how to avoid the extra list thing?
x.consume()
def test_consume_all() -> None:
with wrap({'aaa': {'bbb': {'hi': 123}}}) as w:
w = cast(Wdict, w)
@ -201,9 +193,11 @@ def test_consume_all() -> None:
def test_consume_few() -> None:
import pytest
pytest.skip('Will think about it later..')
with wrap({'important': 123, 'unimportant': 'whatever'}) as w:
with wrap({
'important': 123,
'unimportant': 'whatever'
}) as w:
w = cast(Wdict, w)
w['important'].zoom()
w.consume_all()
@ -212,7 +206,6 @@ def test_consume_few() -> None:
def test_zoom() -> None:
import pytest
with wrap({'aaa': 'whatever'}) as w:
w = cast(Wdict, w)
with pytest.raises(KeyError):
@ -236,7 +229,7 @@ def test_zoom() -> None:
# - very flexible, easy to adjust behaviour
# - cons:
# - can forget to assert about extra entities etc, so error prone
# - if we do something like =assert j.pop('status') == 200, j=, by the time assert happens we already popped item -- makes error handling harder
# - if we do something like =assert j.pop('status') == 200, j=, by the time assert happens we already popped item -- makes erro handling harder
# - a bit verbose.. so probably requires some helper functions though (could be much leaner than current konsume though)
# - if we assert, then terminates parsing too early, if we're defensive then inflates the code a lot with if statements
# - TODO perhaps combine warnings somehow or at least only emit once per module?
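
A compact sketch of the intended usage: every key must be `zoom`ed or `consume`d, otherwise `wrap` raises `UnconsumedError` on exit:

```python
from my.core.konsume import wrap

with wrap({'status': 200, 'body': 'ok'}) as w:
    assert w['status'].zoom().value == 200
    w['body'].consume()
# leaving the block verifies that nothing was forgotten
```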

View file

@ -250,7 +250,7 @@ if __name__ == '__main__':
test()
## legacy/deprecated methods for backwards compatibility
## legacy/deprecated methods for backwards compatilibity
if not TYPE_CHECKING:
from .compat import deprecated

View file

@ -2,14 +2,11 @@
Utils for mime/filetype handling
"""
from __future__ import annotations
from .internal import assert_subpackage
assert_subpackage(__name__)
from .internal import assert_subpackage; assert_subpackage(__name__)
import functools
from pathlib import Path
from .common import PathIsh
@functools.lru_cache(1)
@ -26,7 +23,7 @@ import mimetypes # todo do I need init()?
# todo wtf? fastermime thinks it's mime is application/json even if the extension is xz??
# whereas magic detects correctly: application/x-zstd and application/x-xz
def fastermime(path: Path | str) -> str:
def fastermime(path: PathIsh) -> str:
paths = str(path)
# mimetypes is faster, so try it first
(mime, _) = mimetypes.guess_type(paths)

View file

@ -1,7 +1,6 @@
"""
Various helpers for reading org-mode data
"""
from datetime import datetime
@ -23,20 +22,17 @@ def parse_org_datetime(s: str) -> datetime:
# TODO I guess want to borrow inspiration from bs4? element type <-> tag; and similar logic for find_one, find_all
from collections.abc import Iterable
from typing import Callable, TypeVar
from typing import Callable, Iterable, TypeVar
from orgparse import OrgNode
V = TypeVar('V')
def collect(n: OrgNode, cfun: Callable[[OrgNode], Iterable[V]]) -> Iterable[V]:
yield from cfun(n)
for c in n.children:
yield from collect(c, cfun)
from more_itertools import one
from orgparse.extra import Table

View file

@ -7,14 +7,17 @@ from __future__ import annotations
# todo not sure if belongs to 'core'. It's certainly 'more' core than actual modules, but still not essential
# NOTE: this file is meant to be importable without Pandas installed
import dataclasses
from collections.abc import Iterable, Iterator
from datetime import datetime, timezone
from pprint import pformat
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
Iterator,
Literal,
Type,
TypeVar,
)
@ -175,7 +178,7 @@ def _to_jsons(it: Iterable[Res[Any]]) -> Iterable[Json]:
Schema = Any
def _as_columns(s: Schema) -> dict[str, type]:
def _as_columns(s: Schema) -> Dict[str, Type]:
# todo would be nice to extract properties; add tests for this as well
if dataclasses.is_dataclass(s):
return {f.name: f.type for f in dataclasses.fields(s)} # type: ignore[misc] # ugh, why mypy thinks f.type can return str??

View file

@ -8,7 +8,6 @@ def get_mycfg_dir() -> Path:
import os
import appdirs # type: ignore[import-untyped]
# not sure if that's necessary, i.e. could rely on PYTHONPATH instead
# on the other hand, by using MY_CONFIG we are guaranteed to load it from the desired path?
mvar = os.environ.get('MY_CONFIG')

View file

@ -2,9 +2,7 @@
Helpers to prevent depending on pytest in runtime
"""
from .internal import assert_subpackage
assert_subpackage(__name__)
from .internal import assert_subpackage; assert_subpackage(__name__)
import sys
import typing

View file

@ -5,20 +5,23 @@ The main entrypoint to this library is the 'select' function below; try:
python3 -c "from my.core.query import select; help(select)"
"""
from __future__ import annotations
import dataclasses
import importlib
import inspect
import itertools
from collections.abc import Iterable, Iterator
from datetime import datetime
from typing import (
Any,
Callable,
Dict,
Iterable,
Iterator,
List,
NamedTuple,
Optional,
Tuple,
TypeVar,
Union,
)
import more_itertools
@ -48,7 +51,6 @@ class Unsortable(NamedTuple):
class QueryException(ValueError):
"""Used to differentiate query-related errors, so the CLI interface is more expressive"""
pass
@ -61,7 +63,7 @@ def locate_function(module_name: str, function_name: str) -> Callable[[], Iterab
"""
try:
mod = importlib.import_module(module_name)
for fname, f in inspect.getmembers(mod, inspect.isfunction):
for (fname, f) in inspect.getmembers(mod, inspect.isfunction):
if fname == function_name:
return f
# in case the function is defined dynamically,
@ -81,10 +83,10 @@ def locate_qualified_function(qualified_name: str) -> Callable[[], Iterable[ET]]
if "." not in qualified_name:
raise QueryException("Could not find a '.' in the function name, e.g. my.reddit.rexport.comments")
rdot_index = qualified_name.rindex(".")
return locate_function(qualified_name[:rdot_index], qualified_name[rdot_index + 1 :])
return locate_function(qualified_name[:rdot_index], qualified_name[rdot_index + 1:])
def attribute_func(obj: T, where: Where, default: U | None = None) -> OrderFunc | None:
def attribute_func(obj: T, where: Where, default: Optional[U] = None) -> Optional[OrderFunc]:
"""
Attempts to find an attribute which matches the 'where_function' on the object,
using some getattr/dict checks. Returns a function which when called with
@ -131,11 +133,11 @@ def attribute_func(obj: T, where: Where, default: U | None = None) -> OrderFunc
def _generate_order_by_func(
obj_res: Res[T],
*,
key: str | None = None,
where_function: Where | None = None,
default: U | None = None,
key: Optional[str] = None,
where_function: Optional[Where] = None,
default: Optional[U] = None,
force_unsortable: bool = False,
) -> OrderFunc | None:
) -> Optional[OrderFunc]:
"""
Accepts an object Res[T] (Instance of some class or Exception)
@ -200,7 +202,7 @@ pass 'drop_exceptions' to ignore exceptions""")
# user must provide either a key or a where predicate
if where_function is not None:
func: OrderFunc | None = attribute_func(obj, where_function, default)
func: Optional[OrderFunc] = attribute_func(obj, where_function, default)
if func is not None:
return func
@ -216,6 +218,8 @@ pass 'drop_exceptions' to ignore exceptions""")
return None # couldn't compute a OrderFunc for this class/instance
# currently using the 'key set' as a proxy for 'this is the same type of thing'
def _determine_order_by_value_key(obj_res: ET) -> Any:
"""
@ -240,7 +244,7 @@ def _drop_unsorted(itr: Iterator[ET], orderfunc: OrderFunc) -> Iterator[ET]:
# try getting the first value from the iterator
# similar to my.core.common.warn_if_empty? this doesn't go through the whole iterator though
def _peek_iter(itr: Iterator[ET]) -> tuple[ET | None, Iterator[ET]]:
def _peek_iter(itr: Iterator[ET]) -> Tuple[Optional[ET], Iterator[ET]]:
itr = more_itertools.peekable(itr)
try:
first_item = itr.peek()
@ -251,9 +255,9 @@ def _peek_iter(itr: Iterator[ET]) -> tuple[ET | None, Iterator[ET]]:
# similar to 'my.core.error.sort_res_by'?
def _wrap_unsorted(itr: Iterator[ET], orderfunc: OrderFunc) -> tuple[Iterator[Unsortable], Iterator[ET]]:
unsortable: list[Unsortable] = []
sortable: list[ET] = []
def _wrap_unsorted(itr: Iterator[ET], orderfunc: OrderFunc) -> Tuple[Iterator[Unsortable], Iterator[ET]]:
unsortable: List[Unsortable] = []
sortable: List[ET] = []
for o in itr:
# if input to select was another select
if isinstance(o, Unsortable):
@ -275,7 +279,7 @@ def _handle_unsorted(
orderfunc: OrderFunc,
drop_unsorted: bool,
wrap_unsorted: bool
) -> tuple[Iterator[Unsortable], Iterator[ET]]:
) -> Tuple[Iterator[Unsortable], Iterator[ET]]:
# prefer drop_unsorted to wrap_unsorted, if both were present
if drop_unsorted:
return iter([]), _drop_unsorted(itr, orderfunc)
@ -290,16 +294,16 @@ def _handle_unsorted(
# different types. ***This consumes the iterator***, so
# you should definitely itertools.tee it beforehand
# as to not exhaust the values
def _generate_order_value_func(itr: Iterator[ET], order_value: Where, default: U | None = None) -> OrderFunc:
def _generate_order_value_func(itr: Iterator[ET], order_value: Where, default: Optional[U] = None) -> OrderFunc:
# TODO: add a kwarg to force lookup for every item? would sort of be like core.common.guess_datetime then
order_by_lookup: dict[Any, OrderFunc] = {}
order_by_lookup: Dict[Any, OrderFunc] = {}
# need to go through a copy of the whole iterator here to
# pre-generate functions to support sorting mixed types
for obj_res in itr:
key: Any = _determine_order_by_value_key(obj_res)
if key not in order_by_lookup:
keyfunc: OrderFunc | None = _generate_order_by_func(
keyfunc: Optional[OrderFunc] = _generate_order_by_func(
obj_res,
where_function=order_value,
default=default,
@ -320,12 +324,12 @@ def _generate_order_value_func(itr: Iterator[ET], order_value: Where, default: U
def _handle_generate_order_by(
itr,
*,
order_by: OrderFunc | None = None,
order_key: str | None = None,
order_value: Where | None = None,
default: U | None = None,
) -> tuple[OrderFunc | None, Iterator[ET]]:
order_by_chosen: OrderFunc | None = order_by # if the user just supplied a function themselves
order_by: Optional[OrderFunc] = None,
order_key: Optional[str] = None,
order_value: Optional[Where] = None,
default: Optional[U] = None,
) -> Tuple[Optional[OrderFunc], Iterator[ET]]:
order_by_chosen: Optional[OrderFunc] = order_by # if the user just supplied a function themselves
if order_by is not None:
return order_by, itr
if order_key is not None:
@ -350,19 +354,19 @@ def _handle_generate_order_by(
def select(
src: Iterable[ET] | Callable[[], Iterable[ET]],
src: Union[Iterable[ET], Callable[[], Iterable[ET]]],
*,
where: Where | None = None,
order_by: OrderFunc | None = None,
order_key: str | None = None,
order_value: Where | None = None,
default: U | None = None,
where: Optional[Where] = None,
order_by: Optional[OrderFunc] = None,
order_key: Optional[str] = None,
order_value: Optional[Where] = None,
default: Optional[U] = None,
reverse: bool = False,
limit: int | None = None,
limit: Optional[int] = None,
drop_unsorted: bool = False,
wrap_unsorted: bool = True,
warn_exceptions: bool = False,
warn_func: Callable[[Exception], None] | None = None,
warn_func: Optional[Callable[[Exception], None]] = None,
drop_exceptions: bool = False,
raise_exceptions: bool = False,
) -> Iterator[ET]:
@ -613,7 +617,7 @@ class _B(NamedTuple):
# move these to tests/? They are re-used so much in the tests below,
# not sure where the best place for these is
def _mixed_iter() -> Iterator[_A | _B]:
def _mixed_iter() -> Iterator[Union[_A, _B]]:
yield _A(x=datetime(year=2009, month=5, day=10, hour=4, minute=10, second=1), y=5, z=10)
yield _B(y=datetime(year=2015, month=5, day=10, hour=4, minute=10, second=1))
yield _A(x=datetime(year=2005, month=5, day=10, hour=4, minute=10, second=1), y=10, z=2)
@ -622,7 +626,7 @@ def _mixed_iter() -> Iterator[_A | _B]:
yield _A(x=datetime(year=2005, month=4, day=10, hour=4, minute=10, second=1), y=2, z=-5)
def _mixed_iter_errors() -> Iterator[Res[_A | _B]]:
def _mixed_iter_errors() -> Iterator[Res[Union[_A, _B]]]:
m = _mixed_iter()
yield from itertools.islice(m, 0, 3)
yield RuntimeError("Unhandled error!")
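
A minimal sketch of the entrypoint with the keyword arguments above (plain dicts here are illustrative; `select` resolves `order_key` against attributes or dict keys):

```python
from my.core.query import select

items = [{'x': 2}, {'x': 1}, {'x': 3}]
assert [o['x'] for o in select(items, order_key='x')] == [1, 2, 3]
```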

View file

@ -7,14 +7,11 @@ filtered iterator
See the select_range function below
"""
from __future__ import annotations
import re
import time
from collections.abc import Iterator
from datetime import date, datetime, timedelta
from functools import cache
from typing import Any, Callable, NamedTuple
from functools import lru_cache
from typing import Any, Callable, Iterator, NamedTuple, Optional, Type
import more_itertools
@ -28,9 +25,7 @@ from .query import (
select,
)
timedelta_regex = re.compile(
r"^((?P<weeks>[\.\d]+?)w)?((?P<days>[\.\d]+?)d)?((?P<hours>[\.\d]+?)h)?((?P<minutes>[\.\d]+?)m)?((?P<seconds>[\.\d]+?)s)?$"
)
timedelta_regex = re.compile(r"^((?P<weeks>[\.\d]+?)w)?((?P<days>[\.\d]+?)d)?((?P<hours>[\.\d]+?)h)?((?P<minutes>[\.\d]+?)m)?((?P<seconds>[\.\d]+?)s)?$")
# https://stackoverflow.com/a/51916936
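
As a quick sketch of the grammar that regex accepts (assuming it's importable from `my.core.query_range`):

```python
from my.core.query_range import timedelta_regex

m = timedelta_regex.match('1w2d3h')
assert m is not None
assert (m.group('weeks'), m.group('days'), m.group('hours')) == ('1', '2', '3')
```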
@ -93,7 +88,7 @@ def parse_datetime_float(date_str: str) -> float:
# dateparser is a bit more lenient than the above, lets you type
# all sorts of dates as inputs
# https://github.com/scrapinghub/dateparser#how-to-use
res: datetime | None = dateparser.parse(ds, settings={"DATE_ORDER": "YMD"})
res: Optional[datetime] = dateparser.parse(ds, settings={"DATE_ORDER": "YMD"})
if res is not None:
return res.timestamp()
@ -103,7 +98,7 @@ def parse_datetime_float(date_str: str) -> float:
# probably DateLike input? but a user could specify an order_key
# which is an epoch timestamp or a float value which they
# expect to be converted to a datetime to compare
@cache
@lru_cache(maxsize=None)
def _datelike_to_float(dl: Any) -> float:
if isinstance(dl, datetime):
return dl.timestamp()
@ -135,12 +130,11 @@ class RangeTuple(NamedTuple):
of the timeframe -- 'before'
- before and after - anything after 'after' and before 'before', acts as a time range
"""
# technically doesn't need to be Optional[Any],
# just to make it more clear these can be None
after: Any | None
before: Any | None
within: Any | None
after: Optional[Any]
before: Optional[Any]
within: Optional[Any]
Converter = Callable[[Any], Any]
@ -151,9 +145,9 @@ def _parse_range(
unparsed_range: RangeTuple,
end_parser: Converter,
within_parser: Converter,
parsed_range: RangeTuple | None = None,
error_message: str | None = None,
) -> RangeTuple | None:
parsed_range: Optional[RangeTuple] = None,
error_message: Optional[str] = None
) -> Optional[RangeTuple]:
if parsed_range is not None:
return parsed_range
@ -182,11 +176,11 @@ def _create_range_filter(
end_parser: Converter,
within_parser: Converter,
attr_func: Where,
parsed_range: RangeTuple | None = None,
default_before: Any | None = None,
value_coercion_func: Converter | None = None,
error_message: str | None = None,
) -> Where | None:
parsed_range: Optional[RangeTuple] = None,
default_before: Optional[Any] = None,
value_coercion_func: Optional[Converter] = None,
error_message: Optional[str] = None,
) -> Optional[Where]:
"""
Handles:
- parsing the user input into values that are comparable to items the iterable returns
@ -278,17 +272,17 @@ def _create_range_filter(
def select_range(
itr: Iterator[ET],
*,
where: Where | None = None,
order_key: str | None = None,
order_value: Where | None = None,
order_by_value_type: type | None = None,
unparsed_range: RangeTuple | None = None,
where: Optional[Where] = None,
order_key: Optional[str] = None,
order_value: Optional[Where] = None,
order_by_value_type: Optional[Type] = None,
unparsed_range: Optional[RangeTuple] = None,
reverse: bool = False,
limit: int | None = None,
limit: Optional[int] = None,
drop_unsorted: bool = False,
wrap_unsorted: bool = False,
warn_exceptions: bool = False,
warn_func: Callable[[Exception], None] | None = None,
warn_func: Optional[Callable[[Exception], None]] = None,
drop_exceptions: bool = False,
raise_exceptions: bool = False,
) -> Iterator[ET]:
@ -323,10 +317,9 @@ def select_range(
drop_exceptions=drop_exceptions,
raise_exceptions=raise_exceptions,
warn_exceptions=warn_exceptions,
warn_func=warn_func,
)
warn_func=warn_func)
order_by_chosen: OrderFunc | None = None
order_by_chosen: Optional[OrderFunc] = None
# if the user didn't specify an attribute to order value, but specified a type
# we should search for on each value in the iterator
@ -337,8 +330,6 @@ def select_range(
# if the user supplied a order_key, and/or we've generated an order_value, create
# the function that accesses that type on each value in the iterator
if order_key is not None or order_value is not None:
# _generate_order_value_func internally here creates a copy of the iterator, which has to
# be consumed in-case we're sorting by mixed types
order_by_chosen, itr = _handle_generate_order_by(itr, order_key=order_key, order_value=order_value)
# signifies that itr is empty -- can early return here
if order_by_chosen is None:
@ -354,7 +345,7 @@ Specify a type or a key to order the value by""")
# force drop_unsorted=True so we can use _create_range_filter
# sort the iterable by the generated order_by_chosen function
itr = select(itr, order_by=order_by_chosen, drop_unsorted=True)
filter_func: Where | None
filter_func: Optional[Where]
if order_by_value_type in [datetime, date]:
filter_func = _create_range_filter(
unparsed_range=unparsed_range,
@ -362,8 +353,7 @@ Specify a type or a key to order the value by""")
within_parser=parse_timedelta_float,
attr_func=order_by_chosen, # type: ignore[arg-type]
default_before=time.time(),
value_coercion_func=_datelike_to_float,
)
value_coercion_func=_datelike_to_float)
elif order_by_value_type in [int, float]:
# allow primitives to be converted using the default int(), float() callables
filter_func = _create_range_filter(
@ -372,8 +362,7 @@ Specify a type or a key to order the value by""")
within_parser=order_by_value_type,
attr_func=order_by_chosen, # type: ignore[arg-type]
default_before=None,
value_coercion_func=order_by_value_type,
)
value_coercion_func=order_by_value_type)
else:
# TODO: add additional kwargs to let the user sort by other values, by specifying the parsers?
# would need to allow passing the end_parser, within parser, default before and value_coercion_func...
@ -400,7 +389,7 @@ Specify a type or a key to order the value by""")
return itr
# reuse items from query for testing
# re-use items from query for testing
from .query import _A, _B, _Float, _mixed_iter_errors
@ -481,7 +470,7 @@ def test_range_predicate() -> None:
# filter from 0 to 5
rn: RangeTuple = RangeTuple("0", "5", None)
zero_to_five_filter: Where | None = int_filter_func(unparsed_range=rn)
zero_to_five_filter: Optional[Where] = int_filter_func(unparsed_range=rn)
assert zero_to_five_filter is not None
# this is just a Where function, given some input it returns True/False if the value is allowed
assert zero_to_five_filter(3) is True
@ -494,7 +483,6 @@ def test_range_predicate() -> None:
rn = RangeTuple(None, 3, "3.5")
assert list(filter(int_filter_func(unparsed_range=rn, attr_func=identity), src())) == ["0", "1", "2"]
def test_parse_range() -> None:
from functools import partial

View file

@ -1,11 +1,9 @@
from __future__ import annotations
import datetime
from dataclasses import asdict, is_dataclass
from decimal import Decimal
from functools import cache
from functools import lru_cache
from pathlib import Path
from typing import Any, Callable, NamedTuple
from typing import Any, Callable, NamedTuple, Optional
from .error import error_to_json
from .pytest import parametrize
@ -59,12 +57,12 @@ def _default_encode(obj: Any) -> Any:
# could possibly run multiple times/raise warning if you provide different 'default'
# functions or change the kwargs? The alternative is to maintain all of this at the module
# level, which is just as annoying
@cache
@lru_cache(maxsize=None)
def _dumps_factory(**kwargs) -> Callable[[Any], str]:
use_default: DefaultEncoder = _default_encode
# if the user passed an additional 'default' parameter,
# try using that to serialize before _default_encode
_additional_default: DefaultEncoder | None = kwargs.get("default")
_additional_default: Optional[DefaultEncoder] = kwargs.get("default")
if _additional_default is not None and callable(_additional_default):
def wrapped_default(obj: Any) -> Any:
@ -80,9 +78,9 @@ def _dumps_factory(**kwargs) -> Callable[[Any], str]:
kwargs["default"] = use_default
prefer_factory: str | None = kwargs.pop('_prefer_factory', None)
prefer_factory: Optional[str] = kwargs.pop('_prefer_factory', None)
def orjson_factory() -> Dumps | None:
def orjson_factory() -> Optional[Dumps]:
try:
import orjson
except ModuleNotFoundError:
@ -97,7 +95,7 @@ def _dumps_factory(**kwargs) -> Callable[[Any], str]:
return _orjson_dumps
def simplejson_factory() -> Dumps | None:
def simplejson_factory() -> Optional[Dumps]:
try:
from simplejson import dumps as simplejson_dumps
except ModuleNotFoundError:
@ -117,7 +115,7 @@ def _dumps_factory(**kwargs) -> Callable[[Any], str]:
return _simplejson_dumps
def stdlib_factory() -> Dumps | None:
def stdlib_factory() -> Optional[Dumps]:
import json
from .warnings import high
@ -152,7 +150,7 @@ def _dumps_factory(**kwargs) -> Callable[[Any], str]:
def dumps(
obj: Any,
default: DefaultEncoder | None = None,
default: Optional[DefaultEncoder] = None,
**kwargs,
) -> str:
"""

View file

@ -3,12 +3,9 @@ Decorator to gracefully handle importing a data source, or warning
and yielding nothing (or a default) when its not available
"""
from __future__ import annotations
import warnings
from collections.abc import Iterable, Iterator
from functools import wraps
from typing import Any, Callable, TypeVar
from typing import Any, Callable, Iterable, Iterator, Optional, TypeVar
from .warnings import medium
@ -29,8 +26,8 @@ _DEFAULT_ITR = ()
def import_source(
*,
default: Iterable[T] = _DEFAULT_ITR,
module_name: str | None = None,
help_url: str | None = None,
module_name: Optional[str] = None,
help_url: Optional[str] = None,
) -> Callable[..., Callable[..., Iterator[T]]]:
"""
doesn't really play well with types, but is used to catch
@ -53,7 +50,6 @@ def import_source(
except (ImportError, AttributeError) as err:
from . import core_config as CC
from .error import warn_my_config_import_error
suppressed_in_conf = False
if module_name is not None and CC.config._is_module_active(module_name) is False:
suppressed_in_conf = True
@ -76,7 +72,5 @@ class core:
if not matched_config_err and isinstance(err, AttributeError):
raise err
yield from default
return wrapper
return decorator
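
For reference, the decorator is meant for `all.py`-style aggregators; a sketch under the assumption that `my.lastfm` may or may not be configured on a given machine:

```python
from my.core.source import import_source

# if my.lastfm isn't set up, the wrapper yields the default (nothing) and warns instead of raising
@import_source(module_name='my.lastfm')
def _lastfm_scrobbles():
    from my.lastfm import scrobbles
    yield from scrobbles()
```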

View file

@ -1,16 +1,12 @@
from __future__ import annotations
from .internal import assert_subpackage; assert_subpackage(__name__)
from .internal import assert_subpackage # noqa: I001
assert_subpackage(__name__)
import shutil
import sqlite3
from collections.abc import Iterator
from contextlib import contextmanager
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Any, Callable, Literal, Union, overload
from typing import Any, Callable, Iterator, Literal, Optional, Tuple, Union, overload
from .common import PathIsh
from .compat import assert_never
@ -26,7 +22,6 @@ def test_sqlite_connect_immutable(tmp_path: Path) -> None:
conn.execute('CREATE TABLE testtable (col)')
import pytest
with pytest.raises(sqlite3.OperationalError, match='readonly database'):
with sqlite_connect_immutable(db) as conn:
conn.execute('DROP TABLE testtable')
@ -38,7 +33,6 @@ def test_sqlite_connect_immutable(tmp_path: Path) -> None:
SqliteRowFactory = Callable[[sqlite3.Cursor, sqlite3.Row], Any]
def dict_factory(cursor, row):
fields = [column[0] for column in cursor.description]
return dict(zip(fields, row))
@ -46,9 +40,8 @@ def dict_factory(cursor, row):
Factory = Union[SqliteRowFactory, Literal['row', 'dict']]
@contextmanager
def sqlite_connection(db: PathIsh, *, immutable: bool = False, row_factory: Factory | None = None) -> Iterator[sqlite3.Connection]:
def sqlite_connection(db: PathIsh, *, immutable: bool=False, row_factory: Optional[Factory]=None) -> Iterator[sqlite3.Connection]:
dbp = f'file:{db}'
# https://www.sqlite.org/draft/uri.html#uriimmutable
if immutable:
@ -104,76 +97,31 @@ def sqlite_copy_and_open(db: PathIsh) -> sqlite3.Connection:
# and then the return type ends up as Iterator[Tuple[str, ...]], which isn't desirable :(
# a bit annoying to have this copy-pasting, but hopefully not a big issue
# fmt: off
@overload
def select(cols: tuple[str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any ]]: ...
def select(cols: Tuple[str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any ]]: ...
@overload
def select(cols: tuple[str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any ]]: ...
def select(cols: Tuple[str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any ]]: ...
@overload
def select(cols: tuple[str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any, Any ]]: ...
def select(cols: Tuple[str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any, Any ]]: ...
@overload
def select(cols: tuple[str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any, Any, Any ]]: ...
def select(cols: Tuple[str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any, Any, Any ]]: ...
@overload
def select(cols: tuple[str, str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any, Any, Any, Any ]]: ...
def select(cols: Tuple[str, str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any, Any, Any, Any ]]: ...
@overload
def select(cols: tuple[str, str, str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any, Any, Any, Any, Any ]]: ...
def select(cols: Tuple[str, str, str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any, Any, Any, Any, Any ]]: ...
@overload
def select(cols: tuple[str, str, str, str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any, Any, Any, Any, Any, Any ]]: ...
def select(cols: Tuple[str, str, str, str, str, str, str ], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any, Any, Any, Any, Any, Any ]]: ...
@overload
def select(cols: tuple[str, str, str, str, str, str, str, str], rest: str, *, db: sqlite3.Connection) -> \
Iterator[tuple[Any, Any, Any, Any, Any, Any, Any, Any]]: ...
# fmt: on
def select(cols: Tuple[str, str, str, str, str, str, str, str], rest: str, *, db: sqlite3.Connection) -> \
Iterator[Tuple[Any, Any, Any, Any, Any, Any, Any, Any]]: ...
def select(cols, rest, *, db):
# db arg is last cause that results in nicer code formatting..
return db.execute('SELECT ' + ','.join(cols) + ' ' + rest)
class SqliteTool:
def __init__(self, connection: sqlite3.Connection) -> None:
self.connection = connection
def _get_sqlite_master(self) -> dict[str, str]:
res = {}
for c in self.connection.execute('SELECT name, type FROM sqlite_master'):
[name, type_] = c
assert type_ in {'table', 'index', 'view', 'trigger'}, (name, type_) # just in case
res[name] = type_
return res
def get_table_names(self) -> list[str]:
master = self._get_sqlite_master()
res = []
for name, type_ in master.items():
if type_ != 'table':
continue
res.append(name)
return res
def get_table_schema(self, name: str) -> dict[str, str]:
"""
Returns map from column name to column type
NOTE: Sometimes this doesn't work if the db has some extensions (e.g. happens for facebook apps)
In this case you might still be able to use get_table_names
"""
schema: dict[str, str] = {}
for row in self.connection.execute(f'PRAGMA table_info(`{name}`)'):
col = row[1]
type_ = row[2]
# hmm, somewhere between 3.34.1 and 3.37.2, sqlite started normalising type names to uppercase
# let's do this just in case, since Pythons < 3.10 use the old version
# e.g. it could have returned 'blob' and that would confuse blob check (see _check_allowed_blobs)
type_ = type_.upper()
schema[col] = type_
return schema
def get_table_schemas(self) -> dict[str, dict[str, str]]:
return {name: self.get_table_schema(name) for name in self.get_table_names()}
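
The `SqliteTool` above (present only on the master side of this diff) is a small introspection wrapper; a sketch of what it reports:

```python
import sqlite3

from my.core.sqlite import SqliteTool

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INT, y TEXT)')
tool = SqliteTool(conn)
assert tool.get_table_names() == ['t']
assert tool.get_table_schema('t') == {'x': 'INT', 'y': 'TEXT'}
```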

View file

@ -2,13 +2,10 @@
Helpers for hpi doctor/stats functionality.
'''
from __future__ import annotations
import collections.abc
import importlib
import inspect
import typing
from collections.abc import Iterable, Iterator, Sequence
from contextlib import contextmanager
from datetime import datetime
from pathlib import Path
@ -16,13 +13,20 @@ from types import ModuleType
from typing import (
Any,
Callable,
Dict,
Iterable,
Iterator,
List,
Optional,
Protocol,
Sequence,
Union,
cast,
)
from .types import asdict
Stats = dict[str, Any]
Stats = Dict[str, Any]
class StatsFun(Protocol):
@ -51,10 +55,10 @@ def quick_stats():
def stat(
func: Callable[[], Iterable[Any]] | Iterable[Any],
func: Union[Callable[[], Iterable[Any]], Iterable[Any]],
*,
quick: bool = False,
name: str | None = None,
name: Optional[str] = None,
) -> Stats:
"""
Extracts various statistics from a passed iterable/callable, e.g.:
@ -149,8 +153,8 @@ def test_stat() -> None:
#
def get_stats(module_name: str, *, guess: bool = False) -> StatsFun | None:
stats: StatsFun | None = None
def get_stats(module_name: str, *, guess: bool = False) -> Optional[StatsFun]:
stats: Optional[StatsFun] = None
try:
module = importlib.import_module(module_name)
except Exception:
@ -163,7 +167,7 @@ def get_stats(module_name: str, *, guess: bool = False) -> StatsFun | None:
# TODO maybe could be enough to annotate OUTPUTS or something like that?
# then stats could just use them as hints?
def guess_stats(module: ModuleType) -> StatsFun | None:
def guess_stats(module: ModuleType) -> Optional[StatsFun]:
"""
If the module doesn't have explicitly defined 'stat' function,
this is used to try to guess what could be included in stats automatically
@ -202,7 +206,7 @@ def test_guess_stats() -> None:
}
def _guess_data_providers(module: ModuleType) -> dict[str, Callable]:
def _guess_data_providers(module: ModuleType) -> Dict[str, Callable]:
mfunctions = inspect.getmembers(module, inspect.isfunction)
return {k: v for k, v in mfunctions if is_data_provider(v)}
@ -259,7 +263,7 @@ def test_is_data_provider() -> None:
lam = lambda: [1, 2]
assert not idp(lam)
def has_extra_args(count) -> list[int]:
def has_extra_args(count) -> List[int]:
return list(range(count))
assert not idp(has_extra_args)
@ -336,10 +340,10 @@ def test_type_is_iterable() -> None:
assert not fun(None)
assert not fun(int)
assert not fun(Any)
assert not fun(dict[int, int])
assert not fun(Dict[int, int])
assert fun(list[int])
assert fun(Sequence[dict[str, str]])
assert fun(List[int])
assert fun(Sequence[Dict[str, str]])
assert fun(Iterable[Any])
@ -430,7 +434,7 @@ def test_stat_iterable() -> None:
# experimental, not sure about it..
def _guess_datetime(x: Any) -> datetime | None:
def _guess_datetime(x: Any) -> Optional[datetime]:
# todo hmm implement without exception..
try:
d = asdict(x)
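
To ground the helpers above, a sketch of what `stat` reports for a trivial provider (the exact keys can vary between versions, so treat the output as indicative):

```python
from my.core.stats import stat

def items():
    yield from [1, 2, 3]

print(stat(items))  # e.g. {'items': {'count': 3}}
```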

View file

@ -1,5 +1,3 @@
from __future__ import annotations
import atexit
import os
import shutil
@ -7,9 +5,9 @@ import sys
import tarfile
import tempfile
import zipfile
from collections.abc import Generator, Sequence
from contextlib import contextmanager
from pathlib import Path
from typing import Generator, List, Sequence, Tuple, Union
from .logging import make_logger
@ -44,10 +42,10 @@ TARGZ_EXT = {".tar.gz"}
@contextmanager
def match_structure(
base: Path,
expected: str | Sequence[str],
expected: Union[str, Sequence[str]],
*,
partial: bool = False,
) -> Generator[tuple[Path, ...], None, None]:
) -> Generator[Tuple[Path, ...], None, None]:
"""
Given a 'base' directory or archive (zip/tar.gz), recursively search for one or more paths that match the
pattern described in 'expected'. That can be a single string, or a list
@ -142,8 +140,8 @@ def match_structure(
if not searchdir.is_dir():
raise NotADirectoryError(f"Expected either a zip/tar.gz archive or a directory, received {searchdir}")
matches: list[Path] = []
possible_targets: list[Path] = [searchdir]
matches: List[Path] = []
possible_targets: List[Path] = [searchdir]
while len(possible_targets) > 0:
p = possible_targets.pop(0)
@ -174,7 +172,7 @@ def warn_leftover_files() -> None:
from . import core_config as CC
base_tmp: Path = CC.config.get_tmp_dir()
leftover: list[Path] = list(base_tmp.iterdir())
leftover: List[Path] = list(base_tmp.iterdir())
if leftover:
logger.debug(f"at exit warning: Found leftover files in temporary directory '{leftover}'. this may be because you have multiple hpi processes running -- if so this can be ignored")

View file

@ -2,11 +2,11 @@
Helper 'module' for test_guess_stats
"""
from collections.abc import Iterable, Iterator, Sequence
from contextlib import contextmanager
from dataclasses import dataclass
from datetime import datetime, timedelta
from pathlib import Path
from typing import Iterable, Iterator, Sequence
@dataclass

View file

@ -1,8 +1,6 @@
from __future__ import annotations
import os
from collections.abc import Iterator
from contextlib import contextmanager
from typing import Iterator, Optional
import pytest
@ -17,7 +15,7 @@ skip_if_uses_optional_deps = pytest.mark.skipif(
# TODO maybe move to hpi core?
@contextmanager
def tmp_environ_set(key: str, value: str | None) -> Iterator[None]:
def tmp_environ_set(key: str, value: Optional[str]) -> Iterator[None]:
prev_value = os.environ.get(key)
if value is None:
os.environ.pop(key, None)

View file

@ -1,9 +1,8 @@
import json
import warnings
from collections.abc import Iterator
from datetime import datetime
from pathlib import Path
from typing import NamedTuple
from typing import Iterator, NamedTuple
from ..denylist import DenyList

View file

@ -1,7 +1,7 @@
from __future__ import annotations
from .common import skip_if_uses_optional_deps as pytestmark
from typing import List
# TODO ugh, this is very messy.. need to sort out config overriding here
@ -16,7 +16,7 @@ def test_cachew() -> None:
# TODO ugh. need doublewrap or something to avoid having to pass parens
@mcachew()
def cf() -> list[int]:
def cf() -> List[int]:
nonlocal called
called += 1
return [1, 2, 3]
@ -43,7 +43,7 @@ def test_cachew_dir_none() -> None:
called = 0
@mcachew(cache_path=cache_dir() / 'ctest')
def cf() -> list[int]:
def cf() -> List[int]:
nonlocal called
called += 1
return [called, called, called]
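
These tests exercise roughly the following pattern; a hedged sketch of direct usage (the cache path is illustrative):

```python
from my.core.cachew import mcachew

@mcachew(cache_path='/tmp/hpi-demo-cache')
def expensive() -> list[int]:
    # the first call computes; later calls are served from the cachew cache
    return [1, 2, 3]
```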

View file

@ -2,8 +2,8 @@
Various tests that are checking behaviour of user config wrt to various things
"""
import os
import sys
import os
from pathlib import Path
import pytest

View file

@ -12,7 +12,7 @@ def _init_default_config() -> None:
def test_tmp_config() -> None:
## ugh. ideally this would be on the top level (would be a better test)
## but pytest imports everything first, executes hooks, and some reset_modules() fixtures mess stuff up
## but pytest imports eveything first, executes hooks, and some reset_modules() fixtures mess stuff up
## later would be nice to be a bit more careful about them
_init_default_config()
from my.simple import items

View file

@ -1,7 +1,5 @@
from __future__ import annotations
from collections.abc import Sequence
from functools import cache, lru_cache
from functools import lru_cache
from typing import Dict, Sequence
import pytz
@ -13,7 +11,6 @@ def user_forced() -> Sequence[str]:
# https://stackoverflow.com/questions/36067621/python-all-possible-timezone-abbreviations-for-given-timezone-name-and-vise-ve
try:
from my.config import time as user_config
return user_config.tz.force_abbreviations # type: ignore[attr-defined] # noqa: TRY300
# note: noqa since we're catching case where config doesn't have attribute here as well
except:
@ -22,12 +19,12 @@ def user_forced() -> Sequence[str]:
@lru_cache(1)
def _abbr_to_timezone_map() -> dict[str, pytz.BaseTzInfo]:
def _abbr_to_timezone_map() -> Dict[str, pytz.BaseTzInfo]:
# also force UTC to always correspond to utc
# this makes more sense than Zulu, which it ends up as by default
timezones = [*pytz.all_timezones, 'UTC', *user_forced()]
res: dict[str, pytz.BaseTzInfo] = {}
res: Dict[str, pytz.BaseTzInfo] = {}
for tzname in timezones:
tz = pytz.timezone(tzname)
infos = getattr(tz, '_tzinfos', []) # not sure if can rely on attr always present?
@ -46,7 +43,7 @@ def _abbr_to_timezone_map() -> dict[str, pytz.BaseTzInfo]:
return res
@cache
@lru_cache(maxsize=None)
def abbr_to_timezone(abbr: str) -> pytz.BaseTzInfo:
return _abbr_to_timezone_map()[abbr]
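
A one-liner sketch of the resulting lookup (module path assumed to be `my.core.time`, as in the surrounding file):

```python
from my.core.time import abbr_to_timezone

assert str(abbr_to_timezone('UTC')) == 'UTC'
```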

View file

@ -1,15 +1,14 @@
from __future__ import annotations
from .internal import assert_subpackage
assert_subpackage(__name__)
from .internal import assert_subpackage; assert_subpackage(__name__)
from dataclasses import asdict as dataclasses_asdict
from dataclasses import is_dataclass
from datetime import datetime
from typing import Any
from typing import (
Any,
Dict,
)
Json = dict[str, Any]
Json = Dict[str, Any]
# for now just serves documentation purposes... but one day might make it statically verifiable where possible?
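
A small sketch of the Json alias in use together with dataclasses.asdict, which this module imports (the Point dataclass is made up for illustration):

from dataclasses import asdict, dataclass
from typing import Any

Json = dict[str, Any]  # mirrors the alias above

@dataclass
class Point:
    x: int
    y: int

j: Json = asdict(Point(x=1, y=2))
assert j == {'x': 1, 'y': 2}
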

View file

@ -1,12 +1,10 @@
from __future__ import annotations
import os
import pkgutil
import sys
from collections.abc import Iterable
from itertools import chain
from pathlib import Path
from types import ModuleType
from typing import Iterable, List, Optional
from .discovery_pure import HPIModule, _is_not_module_src, has_stats, ignored
@ -22,14 +20,13 @@ from .discovery_pure import NOT_HPI_MODULE_VAR
assert NOT_HPI_MODULE_VAR in globals() # check name consistency
def is_not_hpi_module(module: str) -> str | None:
def is_not_hpi_module(module: str) -> Optional[str]:
'''
None if it's a module, otherwise returns the reason it isn't
'''
import importlib.util
path: str | None = None
path: Optional[str] = None
try:
# TODO annoying, this can cause import of the parent module?
spec = importlib.util.find_spec(module)
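
The find_spec call above is the core of the check; a standalone sketch of the same idea (importable is a hypothetical helper, not from the codebase):

import importlib.util

def importable(module: str) -> str | None:
    # None if the module resolves, otherwise a reason -- mirroring is_not_hpi_module
    try:
        spec = importlib.util.find_spec(module)
    except ModuleNotFoundError as e:
        return str(e)
    if spec is None:
        return 'no spec found'
    return None

assert importable('json') is None
assert importable('definitely_not_a_module') is not None
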
@ -60,10 +57,9 @@ def _iter_all_importables(pkg: ModuleType) -> Iterable[HPIModule]:
def _discover_path_importables(pkg_pth: Path, pkg_name: str) -> Iterable[HPIModule]:
from .core_config import config
"""Yield all importables under a given path and package."""
from .core_config import config # noqa: F401
for dir_path, dirs, file_names in os.walk(pkg_pth):
file_names.sort()
# NOTE: sorting dirs in place is intended, it's the way you're supposed to do it with os.walk
@ -86,7 +82,6 @@ def _discover_path_importables(pkg_pth: Path, pkg_name: str) -> Iterable[HPIModu
# TODO might need to make it defensive and yield Exception (otherwise hpi doctor might fail for no good reason)
# use onerror=?
# ignored explicitly -> not HPI
# if enabled in config -> HPI
# if disabled in config -> HPI
@ -95,7 +90,7 @@ def _discover_path_importables(pkg_pth: Path, pkg_name: str) -> Iterable[HPIModu
# TODO when do we need to recurse?
def _walk_packages(path: Iterable[str], prefix: str = '', onerror=None) -> Iterable[HPIModule]:
def _walk_packages(path: Iterable[str], prefix: str='', onerror=None) -> Iterable[HPIModule]:
"""
Modified version of https://github.com/python/cpython/blob/d50a0700265536a20bcce3fb108c954746d97625/Lib/pkgutil.py#L53,
to avoid importing modules that are skipped
@ -158,9 +153,8 @@ def _walk_packages(path: Iterable[str], prefix: str = '', onerror=None) -> Itera
path = [p for p in path if not seen(p)]
yield from _walk_packages(path, mname + '.', onerror)
# deprecate?
def get_modules() -> list[HPIModule]:
def get_modules() -> List[HPIModule]:
return list(modules())
@ -175,14 +169,14 @@ def test_module_detection() -> None:
with reset() as cc:
cc.disabled_modules = ['my.location.*', 'my.body.*', 'my.workouts.*', 'my.private.*']
mods = {m.name: m for m in modules()}
assert mods['my.demo'].skip_reason == "has no 'stats()' function"
assert mods['my.demo'] .skip_reason == "has no 'stats()' function"
with reset() as cc:
cc.disabled_modules = ['my.location.*', 'my.body.*', 'my.workouts.*', 'my.private.*', 'my.lastfm']
cc.enabled_modules = ['my.demo']
mods = {m.name: m for m in modules()}
assert mods['my.demo'].skip_reason is None # not skipped
assert mods['my.demo'] .skip_reason is None # not skipped
assert mods['my.lastfm'].skip_reason == "suppressed in the user config"

View file

@ -1,7 +1,6 @@
from __future__ import annotations
import sys
from concurrent.futures import Executor, Future
from typing import Any, Callable, TypeVar
from typing import Any, Callable, Optional, TypeVar
from ..compat import ParamSpec
@ -16,7 +15,7 @@ class DummyExecutor(Executor):
but also want to provide an option to run the code serially (e.g. for debugging)
"""
def __init__(self, max_workers: int | None = 1) -> None:
def __init__(self, max_workers: Optional[int] = 1) -> None:
self._shutdown = False
self._max_workers = max_workers
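
A sketch of how such an executor can run submitted calls inline, which is what makes it useful for debugging code written against concurrent.futures (InlineExecutor is a hypothetical stand-in, not the class above):

from concurrent.futures import Executor, Future

class InlineExecutor(Executor):
    def submit(self, fn, *args, **kwargs) -> Future:
        f: Future = Future()
        try:
            f.set_result(fn(*args, **kwargs))  # run synchronously, right here
        except Exception as e:
            f.set_exception(e)
        return f

with InlineExecutor() as ex:
    assert ex.submit(sum, [1, 2, 3]).result() == 6
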

View file

@ -1,27 +1,27 @@
from __future__ import annotations
import importlib
import importlib.util
import sys
from pathlib import Path
from types import ModuleType
from typing import Optional
from ..common import PathIsh
# TODO only used in tests? not sure if useful at all.
def import_file(p: Path | str, name: str | None = None) -> ModuleType:
def import_file(p: PathIsh, name: Optional[str] = None) -> ModuleType:
p = Path(p)
if name is None:
name = p.stem
spec = importlib.util.spec_from_file_location(name, p)
assert spec is not None, f"Fatal error; Could not create module spec from {name} {p}"
foo = importlib.util.module_from_spec(spec)
loader = spec.loader
assert loader is not None
loader = spec.loader; assert loader is not None
loader.exec_module(foo)
return foo
def import_from(path: Path | str, name: str) -> ModuleType:
def import_from(path: PathIsh, name: str) -> ModuleType:
path = str(path)
sys.path.append(path)
try:
@ -30,7 +30,7 @@ def import_from(path: Path | str, name: str) -> ModuleType:
sys.path.remove(path)
def import_dir(path: Path | str, extra: str = '') -> ModuleType:
def import_dir(path: PathIsh, extra: str = '') -> ModuleType:
p = Path(path)
if p.parts[0] == '~':
p = p.expanduser() # TODO eh. not sure about this..
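
A usage sketch for import_file above, assuming it's in scope (the file appears to live somewhere under my.core):

import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as td:
    p = Path(td) / 'some_config.py'
    p.write_text('value = 42\n')
    mod = import_file(p)  # module name defaults to the file stem, 'some_config'
    assert mod.value == 42
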

View file

@ -4,13 +4,17 @@ Various helpers/transforms of iterators
Ideally this should be as small as possible and we should rely on stdlib itertools or more_itertools
"""
from __future__ import annotations
import warnings
from collections.abc import Hashable, Iterable, Iterator, Sized
from collections.abc import Hashable
from typing import (
TYPE_CHECKING,
Callable,
Dict,
Iterable,
Iterator,
List,
Optional,
Sized,
TypeVar,
Union,
cast,
@ -19,8 +23,9 @@ from typing import (
import more_itertools
from decorator import decorator
from .. import warnings as core_warnings
from ..compat import ParamSpec
from .. import warnings as core_warnings
T = TypeVar('T')
K = TypeVar('K')
@ -34,7 +39,7 @@ def _identity(v: T) -> V: # type: ignore[type-var]
# ugh. nothing in more_itertools?
# perhaps duplicates_everseen? but it doesn't yield non-unique elements?
def ensure_unique(it: Iterable[T], *, key: Callable[[T], K]) -> Iterable[T]:
key2item: dict[K, T] = {}
key2item: Dict[K, T] = {}
for i in it:
k = key(i)
pi = key2item.get(k, None)
@ -67,10 +72,10 @@ def make_dict(
key: Callable[[T], K],
# TODO make value optional instead? but then will need a typing override for it?
value: Callable[[T], V] = _identity,
) -> dict[K, V]:
) -> Dict[K, V]:
with_keys = ((key(i), i) for i in it)
uniques = ensure_unique(with_keys, key=lambda p: p[0])
res: dict[K, V] = {}
res: Dict[K, V] = {}
for k, i in uniques:
res[k] = i if value is None else value(i)
return res
@ -88,8 +93,8 @@ def test_make_dict() -> None:
d = make_dict(it, key=lambda i: i % 2, value=lambda i: i)
# check type inference
d2: dict[str, int] = make_dict(it, key=lambda i: str(i))
d3: dict[str, bool] = make_dict(it, key=lambda i: str(i), value=lambda i: i % 2 == 0)
d2: Dict[str, int] = make_dict(it, key=lambda i: str(i))
d3: Dict[str, bool] = make_dict(it, key=lambda i: str(i), value=lambda i: i % 2 == 0)
LFP = ParamSpec('LFP')
@ -97,7 +102,7 @@ LV = TypeVar('LV')
@decorator
def _listify(func: Callable[LFP, Iterable[LV]], *args: LFP.args, **kwargs: LFP.kwargs) -> list[LV]:
def _listify(func: Callable[LFP, Iterable[LV]], *args: LFP.args, **kwargs: LFP.kwargs) -> List[LV]:
"""
Wraps a function's return value in wrapper (e.g. list)
Useful when an algorithm can be expressed more cleanly as a generator
@ -110,7 +115,7 @@ def _listify(func: Callable[LFP, Iterable[LV]], *args: LFP.args, **kwargs: LFP.k
# so seems easiest to just use specialize instantiations of decorator instead
if TYPE_CHECKING:
def listify(func: Callable[LFP, Iterable[LV]]) -> Callable[LFP, list[LV]]: ... # noqa: ARG001
def listify(func: Callable[LFP, Iterable[LV]]) -> Callable[LFP, List[LV]]: ... # noqa: ARG001
else:
listify = _listify
@ -125,7 +130,7 @@ def test_listify() -> None:
yield 2
res = it()
assert_type(res, list[int])
assert_type(res, List[int])
assert res == [1, 2]
@ -196,24 +201,24 @@ def test_warn_if_empty_list() -> None:
ll = [1, 2, 3]
@warn_if_empty
def nonempty() -> list[int]:
def nonempty() -> List[int]:
return ll
with warnings.catch_warnings(record=True) as w:
res1 = nonempty()
assert len(w) == 0
assert_type(res1, list[int])
assert_type(res1, List[int])
assert isinstance(res1, list)
assert res1 is ll # object should be unchanged!
@warn_if_empty
def empty() -> list[str]:
def empty() -> List[str]:
return []
with warnings.catch_warnings(record=True) as w:
res2 = empty()
assert len(w) == 1
assert_type(res2, list[str])
assert_type(res2, List[str])
assert isinstance(res2, list)
assert res2 == []
@ -237,7 +242,7 @@ def check_if_hashable(iterable: Iterable[_HT]) -> Iterable[_HT]:
"""
NOTE: Despite Hashable bound, typing annotation doesn't guarantee runtime safety
Consider hashable type X, and Y that inherits from X, but not hashable
Then l: list[X] = [Y(...)] is a valid expression, and type checks against Hashable,
Then l: List[X] = [Y(...)] is a valid expression, and type checks against Hashable,
but isn't runtime hashable
"""
# Sadly this doesn't work 100% correctly with dataclasses atm...
@ -263,27 +268,28 @@ def check_if_hashable(iterable: Iterable[_HT]) -> Iterable[_HT]:
# TODO different policies -- error/warn/ignore?
def test_check_if_hashable() -> None:
from dataclasses import dataclass
from typing import Set, Tuple
import pytest
from ..compat import assert_type
x1: list[int] = [1, 2]
x1: List[int] = [1, 2]
r1 = check_if_hashable(x1)
assert_type(r1, Iterable[int])
assert r1 is x1
x2: Iterator[int | str] = iter((123, 'aba'))
x2: Iterator[Union[int, str]] = iter((123, 'aba'))
r2 = check_if_hashable(x2)
assert_type(r2, Iterable[Union[int, str]])
assert list(r2) == [123, 'aba']
x3: tuple[object, ...] = (789, 'aba')
x3: Tuple[object, ...] = (789, 'aba')
r3 = check_if_hashable(x3)
assert_type(r3, Iterable[object])
assert r3 is x3 # object should be unchanged
x4: list[set[int]] = [{1, 2, 3}, {4, 5, 6}]
x4: List[Set[int]] = [{1, 2, 3}, {4, 5, 6}]
with pytest.raises(Exception):
# should be rejected by mypy since set isn't Hashable, but also throw at runtime
r4 = check_if_hashable(x4) # type: ignore[type-var]
@ -301,7 +307,7 @@ def test_check_if_hashable() -> None:
class X:
a: int
x6: list[X] = [X(a=123)]
x6: List[X] = [X(a=123)]
r6 = check_if_hashable(x6)
assert x6 is r6
@ -310,7 +316,7 @@ def test_check_if_hashable() -> None:
class Y(X):
b: str
x7: list[Y] = [Y(a=123, b='aba')]
x7: List[Y] = [Y(a=123, b='aba')]
with pytest.raises(Exception):
# ideally that would also be rejected by mypy, but currently there is a bug
# which treats all dataclasses as hashable: https://github.com/python/mypy/issues/11463
@ -321,12 +327,15 @@ _UET = TypeVar('_UET')
_UEU = TypeVar('_UEU')
# NOTE: for historic reasons, this function had to accept Callable that returns iterator
# NOTE: for historic reasons, this function had to accept Callable that retuns iterator
# instead of just iterator
# TODO maybe deprecated Callable support? not sure
def unique_everseen(
fun: Callable[[], Iterable[_UET]] | Iterable[_UET],
key: Callable[[_UET], _UEU] | None = None,
fun: Union[
Callable[[], Iterable[_UET]],
Iterable[_UET]
],
key: Optional[Callable[[_UET], _UEU]] = None,
) -> Iterator[_UET]:
import os
@ -358,7 +367,7 @@ def test_unique_everseen() -> None:
assert list(unique_everseen(fun_good)) == [123]
with pytest.raises(Exception):
# since function returns a list rather than iterator, check happens immediately
# since function retuns a list rather than iterator, check happens immediately
# , even without advancing the iterator
unique_everseen(fun_bad)
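
A usage sketch for unique_everseen as defined above: duplicates (by key) are dropped lazily, and both a callable returning an iterable and a plain iterable are accepted:

items = [1, 2, 1, 3, 2]
assert list(unique_everseen(lambda: iter(items))) == [1, 2, 3]
# with a key, the first element per key wins: 1 covers all odd, 2 all even
assert list(unique_everseen(iter(items), key=lambda x: x % 2)) == [1, 2]
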

View file

@ -5,16 +5,14 @@ since who looks at the terminal output?
E.g. would be nice to propagate the warnings in the UI (it's even a subclass of Exception!)
'''
from __future__ import annotations
import sys
import warnings
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional
import click
def _colorize(x: str, color: str | None = None) -> str:
def _colorize(x: str, color: Optional[str] = None) -> str:
if color is None:
return x
@ -26,7 +24,7 @@ def _colorize(x: str, color: str | None = None) -> str:
return click.style(x, fg=color)
def _warn(message: str, *args, color: str | None = None, **kwargs) -> None:
def _warn(message: str, *args, color: Optional[str] = None, **kwargs) -> None:
stacklevel = kwargs.get('stacklevel', 1)
kwargs['stacklevel'] = stacklevel + 2 # +1 for this function, +1 for medium/high wrapper
warnings.warn(_colorize(message, color=color), *args, **kwargs) # noqa: B028
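
The stacklevel comment above hints at medium/high wrappers on top of _warn; a sketch of what they presumably look like (the colors here are assumptions, not taken from the diff):

def medium(message: str, *args, **kwargs) -> None:
    _warn(message, *args, color='yellow', **kwargs)

def high(message: str, *args, **kwargs) -> None:
    _warn(message, *args, color='red', **kwargs)
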

View file

@ -1,14 +1,12 @@
'''
Just a demo module for testing and documentation purposes
'''
from __future__ import annotations
import json
from collections.abc import Iterable, Sequence
from dataclasses import dataclass
from datetime import datetime, timezone, tzinfo
from pathlib import Path
from typing import Protocol
from typing import Iterable, Optional, Protocol, Sequence
from my.core import Json, PathIsh, Paths, get_files
@ -22,7 +20,7 @@ class config(Protocol):
# this is to check optional attribute handling
timezone: tzinfo = timezone.utc
external: PathIsh | None = None
external: Optional[PathIsh] = None
@property
def external_module(self):

View file

@ -4,33 +4,30 @@
Consumes data exported by https://github.com/karlicoss/emfitexport
"""
from __future__ import annotations
REQUIRES = [
'git+https://github.com/karlicoss/emfitexport',
]
import dataclasses
import inspect
from collections.abc import Iterable, Iterator
from contextlib import contextmanager
import dataclasses
from datetime import datetime, time, timedelta
import inspect
from pathlib import Path
from typing import Any
import emfitexport.dal as dal
from typing import Any, Dict, Iterable, Iterator, List, Optional
from my.core import (
Res,
Stats,
get_files,
stat,
Res,
Stats,
)
from my.core.cachew import cache_dir, mcachew
from my.core.error import extract_error_datetime, set_error_datetime
from my.core.error import set_error_datetime, extract_error_datetime
from my.core.pandas import DataFrameT
from my.config import emfit as config # isort: skip
from my.config import emfit as config
import emfitexport.dal as dal
Emfit = dal.Emfit
@ -88,7 +85,7 @@ def datas() -> Iterable[Res[Emfit]]:
# TODO should be used for jawbone data as well?
def pre_dataframe() -> Iterable[Res[Emfit]]:
# TODO shit. I need some sort of interrupted sleep detection?
g: list[Emfit] = []
g: List[Emfit] = []
def flush() -> Iterable[Res[Emfit]]:
if len(g) == 0:
@ -115,10 +112,10 @@ def pre_dataframe() -> Iterable[Res[Emfit]]:
def dataframe() -> DataFrameT:
dicts: list[dict[str, Any]] = []
last: Emfit | None = None
dicts: List[Dict[str, Any]] = []
last: Optional[Emfit] = None
for s in pre_dataframe():
d: dict[str, Any]
d: Dict[str, Any]
if isinstance(s, Exception):
edt = extract_error_datetime(s)
d = {
@ -169,12 +166,11 @@ def stats() -> Stats:
@contextmanager
def fake_data(nights: int = 500) -> Iterator:
from my.core.cfg import tmp_config
from tempfile import TemporaryDirectory
import pytz
from my.core.cfg import tmp_config
with TemporaryDirectory() as td:
tdir = Path(td)
gen = dal.FakeData()
@ -191,7 +187,7 @@ def fake_data(nights: int = 500) -> Iterator:
# TODO remove/deprecate it? I think used by timeline
def get_datas() -> list[Emfit]:
def get_datas() -> List[Emfit]:
# todo ugh. run lint properly
return sorted(datas(), key=lambda e: e.start) # type: ignore
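
dataframe() above recovers a timestamp from failed rows via extract_error_datetime; a sketch of the round trip, assuming my.core.error attaches the datetime to the exception and can later recover it:

from datetime import datetime, timezone
from my.core.error import extract_error_datetime, set_error_datetime

e = RuntimeError('failed to parse sleep entry')
dt = datetime(2024, 1, 1, tzinfo=timezone.utc)
set_error_datetime(e, dt)
assert extract_error_datetime(e) == dt
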

View file

@ -7,14 +7,13 @@ REQUIRES = [
]
# todo use ast in setup.py or doctor to extract the corresponding pip packages?
from collections.abc import Iterable, Sequence
from dataclasses import dataclass
from pathlib import Path
from my.config import endomondo as user_config
from typing import Sequence, Iterable
from .core import Paths, get_files
from my.config import endomondo as user_config
@dataclass
class endomondo(user_config):
@ -34,17 +33,15 @@ def inputs() -> Sequence[Path]:
import endoexport.dal as dal
from endoexport.dal import Point, Workout # noqa: F401
from .core import Res
# todo cachew?
def workouts() -> Iterable[Res[Workout]]:
_dal = dal.DAL(inputs())
yield from _dal.workouts()
from .core.pandas import DataFrameT, check_dataframe
from .core.pandas import check_dataframe, DataFrameT
@check_dataframe
def dataframe(*, defensive: bool=True) -> DataFrameT:
@ -78,9 +75,7 @@ def dataframe(*, defensive: bool=True) -> DataFrameT:
return df
from .core import Stats, stat
from .core import stat, Stats
def stats() -> Stats:
return {
# todo pretty print stats?
@ -91,16 +86,13 @@ def stats() -> Stats:
# TODO make sure it's possible to 'advise' functions and override stuff
from collections.abc import Iterator
from contextlib import contextmanager
from typing import Iterator
@contextmanager
def fake_data(count: int=100) -> Iterator:
import json
from tempfile import TemporaryDirectory
from my.core.cfg import tmp_config
from tempfile import TemporaryDirectory
import json
with TemporaryDirectory() as td:
tdir = Path(td)
fd = dal.FakeData()

View file

@ -1,6 +1,6 @@
from .core.warnings import high
high("DEPRECATED! Please use my.core.error instead.")
from .core import __NOT_HPI_MODULE__
from .core.error import *

View file

@ -1,6 +1,5 @@
from collections.abc import Iterator
from dataclasses import dataclass
from typing import Any
from typing import Any, Iterator, List, Tuple
from my.core.compat import NoneType, assert_never
@ -10,7 +9,7 @@ from my.core.compat import NoneType, assert_never
class Helper:
manager: 'Manager'
item: Any # todo realistically, list or dict? could at least type as indexable or something
path: tuple[str, ...]
path: Tuple[str, ...]
def pop_if_primitive(self, *keys: str) -> None:
"""
@ -41,9 +40,9 @@ def is_empty(x) -> bool:
class Manager:
def __init__(self) -> None:
self.helpers: list[Helper] = []
self.helpers: List[Helper] = []
def helper(self, item: Any, *, path: tuple[str, ...] = ()) -> Helper:
def helper(self, item: Any, *, path: Tuple[str, ...] = ()) -> Helper:
res = Helper(manager=self, item=item, path=path)
self.helpers.append(res)
return res

View file

@ -9,7 +9,7 @@ since that allows for easier overriding using namespace packages
See https://github.com/karlicoss/HPI/blob/master/doc/MODULE_DESIGN.org#allpy for more info.
"""
# prevent it from appearing in modules list/doctor
# prevent it from apprearing in modules list/doctor
from ..core import __NOT_HPI_MODULE__
# kinda annoying to keep it, but it's so legacy 'hpi module install my.fbmessenger' works
@ -20,7 +20,6 @@ REQUIRES = [
from my.core.hpi_compat import handle_legacy_import
is_legacy_import = handle_legacy_import(
parent_module_name=__name__,
legacy_submodule_name='export',

View file

@ -1,10 +1,10 @@
from collections.abc import Iterator
from my.core import Res, Stats
from typing import Iterator
from my.core import Res, stat, Stats
from my.core.source import import_source
from .common import Message, _merge_messages
src_export = import_source(module_name='my.fbmessenger.export')
src_android = import_source(module_name='my.fbmessenger.android')

View file

@ -4,20 +4,19 @@ Messenger data from Android app database (in =/data/data/com.facebook.orca/datab
from __future__ import annotations
import sqlite3
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Union
import sqlite3
from typing import Iterator, Sequence, Optional, Dict, Union, List
from my.core import LazyLogger, Paths, Res, datetime_aware, get_files, make_config
from my.core import get_files, Paths, datetime_aware, Res, LazyLogger, make_config
from my.core.common import unique_everseen
from my.core.compat import assert_never
from my.core.error import echain
from my.core.sqlite import sqlite_connection, SqliteTool
from my.core.sqlite import sqlite_connection
from my.config import fbmessenger as user_config # isort: skip
from my.config import fbmessenger as user_config
logger = LazyLogger(__name__)
@ -28,7 +27,7 @@ class Config(user_config.android):
# paths[s]/glob to the exported sqlite databases
export_path: Paths
facebook_id: str | None = None
facebook_id: Optional[str] = None
# hmm. this is necessary for default value (= None) to work
@ -43,13 +42,13 @@ def inputs() -> Sequence[Path]:
@dataclass(unsafe_hash=True)
class Sender:
id: str
name: str | None
name: Optional[str]
@dataclass(unsafe_hash=True)
class Thread:
id: str
name: str | None # isn't set for groups or one to one messages
name: Optional[str] # isn't set for groups or one to one messages
# todo not sure about order of fields...
@ -57,14 +56,14 @@ class Thread:
class _BaseMessage:
id: str
dt: datetime_aware
text: str | None
text: Optional[str]
@dataclass(unsafe_hash=True)
class _Message(_BaseMessage):
thread_id: str
sender_id: str
reply_to_id: str | None
reply_to_id: Optional[str]
# todo hmm, on the one hand would be kinda nice to inherit common.Message protocol here
@ -73,7 +72,7 @@ class _Message(_BaseMessage):
class Message(_BaseMessage):
thread: Thread
sender: Sender
reply_to: Message | None
reply_to: Optional[Message]
Entity = Union[Sender, Thread, _Message]
@ -86,8 +85,8 @@ def _entities() -> Iterator[Res[Entity]]:
for idx, path in enumerate(paths):
logger.info(f'processing [{idx:>{width}}/{total:>{width}}] {path}')
with sqlite_connection(path, immutable=True, row_factory='row') as db:
use_msys = "logging_events_v2" in SqliteTool(db).get_table_names()
try:
use_msys = len(list(db.execute('SELECT * FROM sqlite_master WHERE name = "logging_events_v2"'))) > 0
if use_msys:
yield from _process_db_msys(db)
else:
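
Both sides of this hunk detect the schema by checking for the logging_events_v2 table; a plain-sqlite sketch of the same check (equivalent to the SqliteTool(db).get_table_names() call on the master side):

import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE logging_events_v2 (id INTEGER)')
tables = {r[0] for r in db.execute("SELECT name FROM sqlite_master WHERE type = 'table'")}
use_msys = 'logging_events_v2' in tables
assert use_msys
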
@ -111,7 +110,7 @@ def _normalise_thread_id(key) -> str:
# NOTE: this is sort of copy pasted from other _process_db method
# maybe later could unify them
def _process_db_msys(db: sqlite3.Connection) -> Iterator[Res[Entity]]:
senders: dict[str, Sender] = {}
senders: Dict[str, Sender] = {}
for r in db.execute('SELECT CAST(id AS TEXT) AS id, name FROM contacts'):
s = Sender(
id=r['id'], # looks like it's server id? same used on facebook site
@ -128,7 +127,7 @@ def _process_db_msys(db: sqlite3.Connection) -> Iterator[Res[Entity]]:
# TODO can we get it from db? could infer as the most common id perhaps?
self_id = config.facebook_id
thread_users: dict[str, list[Sender]] = {}
thread_users: Dict[str, List[Sender]] = {}
for r in db.execute('SELECT CAST(thread_key AS TEXT) AS thread_key, CAST(contact_id AS TEXT) AS contact_id FROM participants'):
thread_key = r['thread_key']
user_key = r['contact_id']
@ -174,7 +173,7 @@ def _process_db_msys(db: sqlite3.Connection) -> Iterator[Res[Entity]]:
However it seems that when a message is not sent yet, it doesn't have this server id
(happened only once, but could be just luck of course!)
We exclude these messages to avoid duplication.
However positive filter (e.g. message_id LIKE 'mid%') feels a bit wrong, e.g. what if message ids change or something
However positive filter (e.g. message_id LIKE 'mid%') feels a bit wrong, e.g. what if mesage ids change or something
So instead this excludes only such unsent messages.
*/
message_id != offline_threading_id
@ -194,7 +193,7 @@ def _process_db_msys(db: sqlite3.Connection) -> Iterator[Res[Entity]]:
def _process_db_threads_db2(db: sqlite3.Connection) -> Iterator[Res[Entity]]:
senders: dict[str, Sender] = {}
senders: Dict[str, Sender] = {}
for r in db.execute('''SELECT * FROM thread_users'''):
# for messaging_actor_type == 'REDUCED_MESSAGING_ACTOR', name is None
# but they are still referenced, so need to keep
@ -208,7 +207,7 @@ def _process_db_threads_db2(db: sqlite3.Connection) -> Iterator[Res[Entity]]:
yield s
self_id = config.facebook_id
thread_users: dict[str, list[Sender]] = {}
thread_users: Dict[str, List[Sender]] = {}
for r in db.execute('SELECT * from thread_participants'):
thread_key = r['thread_key']
user_key = r['user_key']
@ -268,9 +267,9 @@ def contacts() -> Iterator[Res[Sender]]:
def messages() -> Iterator[Res[Message]]:
senders: dict[str, Sender] = {}
msgs: dict[str, Message] = {}
threads: dict[str, Thread] = {}
senders: Dict[str, Sender] = {}
msgs: Dict[str, Message] = {}
threads: Dict[str, Thread] = {}
for x in unique_everseen(_entities):
if isinstance(x, Exception):
yield x

View file

@ -1,9 +1,6 @@
from __future__ import annotations
from my.core import __NOT_HPI_MODULE__
from my.core import __NOT_HPI_MODULE__ # isort: skip
from collections.abc import Iterator
from typing import Protocol
from typing import Iterator, Optional, Protocol
from my.core import datetime_aware
@ -13,7 +10,7 @@ class Thread(Protocol):
def id(self) -> str: ...
@property
def name(self) -> str | None: ...
def name(self) -> Optional[str]: ...
class Sender(Protocol):
@ -21,7 +18,7 @@ class Sender(Protocol):
def id(self) -> str: ...
@property
def name(self) -> str | None: ...
def name(self) -> Optional[str]: ...
class Message(Protocol):
@ -32,7 +29,7 @@ class Message(Protocol):
def dt(self) -> datetime_aware: ...
@property
def text(self) -> str | None: ...
def text(self) -> Optional[str]: ...
@property
def thread(self) -> Thread: ...
@ -42,11 +39,8 @@ class Message(Protocol):
from itertools import chain
from more_itertools import unique_everseen
from my.core import Res, warn_if_empty
from my.core import warn_if_empty, Res
@warn_if_empty
def _merge_messages(*sources: Iterator[Res[Message]]) -> Iterator[Res[Message]]:

View file

@ -7,15 +7,16 @@ REQUIRES = [
'git+https://github.com/karlicoss/fbmessengerexport',
]
from collections.abc import Iterator
from contextlib import ExitStack, contextmanager
from dataclasses import dataclass
from typing import Iterator
from my.core import PathIsh, Res, stat, Stats
from my.core.warnings import high
from my.config import fbmessenger as user_config
import fbmessengerexport.dal as messenger
from my.config import fbmessenger as user_config
from my.core import PathIsh, Res, Stats, stat
from my.core.warnings import high
###
# support old style config

View file

@ -2,14 +2,15 @@
Foursquare/Swarm checkins
'''
import json
from datetime import datetime, timedelta, timezone
from datetime import datetime, timezone, timedelta
from itertools import chain
from my.config import foursquare as config
import json
# TODO pytz for timezone???
from my.core import get_files, make_logger
from my.config import foursquare as config
logger = make_logger(__name__)

View file

@ -3,7 +3,8 @@ Unified Github data (merged from GDPR export and periodic API updates)
"""
from . import gdpr, ghexport
from .common import Results, merge_events
from .common import merge_events, Results
def events() -> Results:

View file

@ -1,27 +1,24 @@
"""
Github events and their metadata: comments/issues/pull requests
"""
from __future__ import annotations
from my.core import __NOT_HPI_MODULE__ # isort: skip
from ..core import __NOT_HPI_MODULE__
from collections.abc import Iterable
from datetime import datetime, timezone
from typing import NamedTuple, Optional
from typing import Optional, NamedTuple, Iterable, Set, Tuple
from my.core import make_logger, warn_if_empty
from my.core.error import Res
from ..core import warn_if_empty, LazyLogger
from ..core.error import Res
logger = make_logger(__name__)
logger = LazyLogger(__name__)
class Event(NamedTuple):
dt: datetime
summary: str
eid: str
link: Optional[str]
body: Optional[str] = None
body: Optional[str]=None
is_bot: bool = False
@ -30,7 +27,7 @@ Results = Iterable[Res[Event]]
@warn_if_empty
def merge_events(*sources: Results) -> Results:
from itertools import chain
emitted: set[tuple[datetime, str]] = set()
emitted: Set[Tuple[datetime, str]] = set()
for e in chain(*sources):
if isinstance(e, Exception):
yield e
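
A sketch of the (datetime, id)-keyed dedup that merge_events implements above, on made-up event tuples:

from datetime import datetime

events = [
    (datetime(2024, 1, 1), 'e1'),
    (datetime(2024, 1, 1), 'e1'),  # duplicate -- dropped
    (datetime(2024, 1, 2), 'e2'),
]
emitted: set[tuple[datetime, str]] = set()
merged = []
for dt, eid in events:
    if (dt, eid) in emitted:
        continue
    emitted.add((dt, eid))
    merged.append((dt, eid))
assert [eid for _, eid in merged] == ['e1', 'e2']
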
@ -55,7 +52,7 @@ def parse_dt(s: str) -> datetime:
# experimental way of supporting event ids... not sure
class EventIds:
@staticmethod
def repo_created(*, dts: str, name: str, ref_type: str, ref: str | None) -> str:
def repo_created(*, dts: str, name: str, ref_type: str, ref: Optional[str]) -> str:
return f'{dts}_repocreated_{name}_{ref_type}_{ref}'
@staticmethod

View file

@ -6,9 +6,8 @@ from __future__ import annotations
import json
from abc import abstractmethod
from collections.abc import Iterator, Sequence
from pathlib import Path
from typing import Any
from typing import Any, Iterator, Sequence
from my.core import Paths, Res, Stats, get_files, make_logger, stat, warnings
from my.core.error import echain

View file

@ -1,17 +1,13 @@
"""
Github data: events, comments, etc. (API data)
"""
from __future__ import annotations
REQUIRES = [
'git+https://github.com/karlicoss/ghexport',
]
from dataclasses import dataclass
from my.config import github as user_config
from my.core import Paths
from my.config import github as user_config
@dataclass
@ -25,9 +21,7 @@ class github(user_config):
###
from my.core.cfg import Attrs, make_config
from my.core.cfg import make_config, Attrs
def migration(attrs: Attrs) -> Attrs:
export_dir = 'export_dir'
if export_dir in attrs: # legacy name
@ -47,14 +41,15 @@ except ModuleNotFoundError as e:
############################
from collections.abc import Sequence
from functools import lru_cache
from pathlib import Path
from typing import Tuple, Dict, Sequence, Optional
from my.core import LazyLogger, get_files
from my.core import get_files, LazyLogger
from my.core.cachew import mcachew
from .common import Event, EventIds, Results, parse_dt
from .common import Event, parse_dt, Results, EventIds
logger = LazyLogger(__name__)
@ -87,9 +82,7 @@ def _events() -> Results:
yield e
from my.core import Stats, stat
from my.core import stat, Stats
def stats() -> Stats:
return {
**stat(events),
@ -106,7 +99,7 @@ def _log_if_unhandled(e) -> None:
Link = str
EventId = str
Body = str
def _get_summary(e) -> tuple[str, Link | None, EventId | None, Body | None]:
def _get_summary(e) -> Tuple[str, Optional[Link], Optional[EventId], Optional[Body]]:
# TODO would be nice to give access to raw event within timeline
dts = e['created_at']
eid = e['id']
@ -202,7 +195,7 @@ def _get_summary(e) -> tuple[str, Link | None, EventId | None, Body | None]:
return tp, None, None, None
def _parse_event(d: dict) -> Event:
def _parse_event(d: Dict) -> Event:
summary, link, eid, body = _get_summary(d)
if eid is None:
eid = d['id'] # meh

View file

@ -7,18 +7,15 @@ REQUIRES = [
from dataclasses import dataclass
from my.core import datetime_aware, Paths
from my.config import goodreads as user_config
from my.core import Paths, datetime_aware
@dataclass
class goodreads(user_config):
# paths[s]/glob to the exported JSON data
export_path: Paths
from my.core.cfg import Attrs, make_config
from my.core.cfg import make_config, Attrs
def _migration(attrs: Attrs) -> Attrs:
export_dir = 'export_dir'
@ -32,19 +29,18 @@ config = make_config(goodreads, migration=_migration)
#############################3
from collections.abc import Iterator, Sequence
from pathlib import Path
from my.core import get_files
from typing import Sequence, Iterator
from pathlib import Path
def inputs() -> Sequence[Path]:
return get_files(config.export_path)
from datetime import datetime
import pytz
from goodrexport import dal

View file

@ -1,8 +1,8 @@
from my.core import __NOT_HPI_MODULE__ # isort: skip
from my.core import __NOT_HPI_MODULE__
# NOTE: this tool was quite useful https://github.com/aj3423/aproto
from google.protobuf import descriptor_pb2, descriptor_pool, message_factory
from google.protobuf import descriptor_pool, descriptor_pb2, message_factory
TYPE_STRING = descriptor_pb2.FieldDescriptorProto.TYPE_STRING
TYPE_BYTES = descriptor_pb2.FieldDescriptorProto.TYPE_BYTES

View file

@ -7,20 +7,20 @@ REQUIRES = [
"protobuf", # for parsing blobs from the database
]
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any
from typing import Any, Iterator, Optional, Sequence
from urllib.parse import quote
from my.core import LazyLogger, Paths, Res, datetime_aware, get_files
from my.core import datetime_aware, get_files, LazyLogger, Paths, Res
from my.core.common import unique_everseen
from my.core.sqlite import sqlite_connection
import my.config
from ._android_protobuf import parse_labeled, parse_list, parse_place
import my.config # isort: skip
logger = LazyLogger(__name__)
@ -59,8 +59,8 @@ class Place:
updated_at: datetime_aware # TODO double check it's utc?
title: str
location: Location
address: str | None
note: str | None
address: Optional[str]
note: Optional[str]
@property
def place_url(self) -> str:

View file

@ -2,22 +2,18 @@
Google Takeout exports: browsing history, search/youtube/google play activity
'''
from __future__ import annotations
from my.core import __NOT_HPI_MODULE__ # isort: skip
import re
from collections.abc import Iterable
from datetime import datetime
from enum import Enum
from html.parser import HTMLParser
import re
from pathlib import Path
from typing import Any, Callable
from datetime import datetime
from html.parser import HTMLParser
from typing import List, Optional, Any, Callable, Iterable, Tuple
from urllib.parse import unquote
import pytz
from my.core.time import abbr_to_timezone
from ...core.time import abbr_to_timezone
# NOTE: https://bugs.python.org/issue22377 %Z doesn't work properly
_TIME_FORMATS = [
@ -40,7 +36,7 @@ def parse_dt(s: str) -> datetime:
s, tzabbr = s.rsplit(maxsplit=1)
tz = abbr_to_timezone(tzabbr)
dt: datetime | None = None
dt: Optional[datetime] = None
for fmt in _TIME_FORMATS:
try:
dt = datetime.strptime(s, fmt)
@ -77,7 +73,7 @@ class State(Enum):
Url = str
Title = str
Parsed = tuple[datetime, Url, Title]
Parsed = Tuple[datetime, Url, Title]
Callback = Callable[[datetime, Url, Title], None]
@ -87,9 +83,9 @@ class TakeoutHTMLParser(HTMLParser):
super().__init__()
self.state: State = State.OUTSIDE
self.title_parts: list[str] = []
self.title: str | None = None
self.url: str | None = None
self.title_parts: List[str] = []
self.title: Optional[str] = None
self.url: Optional[str] = None
self.callback = callback
@ -152,7 +148,7 @@ class TakeoutHTMLParser(HTMLParser):
def read_html(tpath: Path, file: str) -> Iterable[Parsed]:
results: list[Parsed] = []
results: List[Parsed] = []
def cb(dt: datetime, url: Url, title: Title) -> None:
results.append((dt, url, title))
parser = TakeoutHTMLParser(callback=cb)
@ -160,3 +156,5 @@ def read_html(tpath: Path, file: str) -> Iterable[Parsed]:
data = fo.read()
parser.feed(data)
return results
from ...core import __NOT_HPI_MODULE__
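
A standalone sketch of the multi-format fallback loop in parse_dt above; the format strings here are assumptions, since the diff truncates the real _TIME_FORMATS list:

from datetime import datetime

_FORMATS = ['%b %d, %Y, %I:%M:%S %p', '%d %b %Y, %H:%M:%S']

def parse_example(s: str) -> datetime:
    dt: datetime | None = None
    for fmt in _FORMATS:
        try:
            dt = datetime.strptime(s, fmt)
            break
        except ValueError:
            continue
    assert dt is not None, s
    return dt

assert parse_example('Jun 23, 2015, 2:43:45 PM').year == 2015
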

View file

@ -1,7 +1,7 @@
"""
Parses Google Takeout using [[https://github.com/purarue/google_takeout_parser][google_takeout_parser]]
Parses Google Takeout using [[https://github.com/seanbreckenridge/google_takeout_parser][google_takeout_parser]]
See [[https://github.com/purarue/google_takeout_parser][google_takeout_parser]] for more information
See [[https://github.com/seanbreckenridge/google_takeout_parser][google_takeout_parser]] for more information
about how to export and organize your takeouts
If the DISABLE_TAKEOUT_CACHE environment variable is set, this won't cache individual
@ -12,31 +12,28 @@ zip files of the exports, which are temporarily unpacked while creating
the cachew cache
"""
REQUIRES = ["git+https://github.com/purarue/google_takeout_parser"]
REQUIRES = ["git+https://github.com/seanbreckenridge/google_takeout_parser"]
import os
from collections.abc import Sequence
from contextlib import ExitStack
from dataclasses import dataclass
import os
from typing import List, Sequence, cast
from pathlib import Path
from typing import cast
from google_takeout_parser.parse_html.html_time_utils import ABBR_TIMEZONES
from my.core import Paths, Stats, get_files, make_config, make_logger, stat
from my.core import make_config, stat, Stats, get_files, Paths, make_logger
from my.core.cachew import mcachew
from my.core.error import ErrorPolicy
from my.core.structure import match_structure
from my.core.time import user_forced
from my.core.time import user_forced
from google_takeout_parser.parse_html.html_time_utils import ABBR_TIMEZONES
ABBR_TIMEZONES.extend(user_forced())
import google_takeout_parser
from google_takeout_parser.merge import CacheResults, GoogleEventSet
from google_takeout_parser.models import BaseEvent
from google_takeout_parser.path_dispatch import TakeoutParser
from google_takeout_parser.merge import GoogleEventSet, CacheResults
from google_takeout_parser.models import BaseEvent
# see https://github.com/purarue/dotfiles/blob/master/.config/my/my/config/__init__.py for an example
# see https://github.com/seanbreckenridge/dotfiles/blob/master/.config/my/my/config/__init__.py for an example
from my.config import google as user_config
@ -59,7 +56,6 @@ logger = make_logger(__name__, level="warning")
# patch the takeout parser logger to match the computed loglevel
from google_takeout_parser.log import setup as setup_takeout_logger
setup_takeout_logger(logger.level)
@ -87,7 +83,7 @@ except ImportError:
google_takeout_version = str(getattr(google_takeout_parser, '__version__', 'unknown'))
def _cachew_depends_on() -> list[str]:
def _cachew_depends_on() -> List[str]:
exports = sorted([str(p) for p in inputs()])
# add google takeout parser pip version to hash, so this re-creates on breaking changes
exports.insert(0, f"google_takeout_version: {google_takeout_version}")
@ -123,7 +119,7 @@ def events(disable_takeout_cache: bool = DISABLE_TAKEOUT_CACHE) -> CacheResults:
else:
results = exit_stack.enter_context(match_structure(path, expected=EXPECTED, partial=True))
for m in results:
# e.g. /home/username/data/google_takeout/Takeout-1634932457.zip") -> 'Takeout-1634932457'
# e.g. /home/sean/data/google_takeout/Takeout-1634932457.zip") -> 'Takeout-1634932457'
# means that zipped takeouts have nice filenames from cachew
cw_id, _, _ = path.name.rpartition(".")
# each takeout result is cached as well, in individual databases per-type

View file

@ -2,17 +2,13 @@
Module for locating and accessing [[https://takeout.google.com][Google Takeout]] data
'''
from __future__ import annotations
from my.core import __NOT_HPI_MODULE__ # isort: skip
from abc import abstractmethod
from collections.abc import Iterable
from pathlib import Path
from typing import Iterable, Optional, Protocol
from more_itertools import last
from my.core import Paths, get_files
from my.core import __NOT_HPI_MODULE__, Paths, get_files
class config:
@ -37,7 +33,7 @@ def make_config() -> config:
return combined_config()
def get_takeouts(*, path: str | None = None) -> Iterable[Path]:
def get_takeouts(*, path: Optional[str] = None) -> Iterable[Path]:
"""
Sometimes google splits takeout into multiple archives, so we need to detect the ones that contain the path we need
"""
@ -49,7 +45,7 @@ def get_takeouts(*, path: str | None = None) -> Iterable[Path]:
yield takeout
def get_last_takeout(*, path: str | None = None) -> Path | None:
def get_last_takeout(*, path: Optional[str] = None) -> Optional[Path]:
return last(get_takeouts(path=path), default=None)
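
A quick sketch of the more_itertools.last(..., default=None) pattern that get_last_takeout relies on:

from more_itertools import last

assert last(iter([1, 2, 3]), default=None) == 3
assert last(iter([]), default=None) is None
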

View file

@ -3,14 +3,14 @@ Hackernews data via Dogsheep [[hacker-news-to-sqlite][https://github.com/dogshee
"""
from __future__ import annotations
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Iterator, Sequence, Optional
import my.config
from my.core import Paths, Res, datetime_aware, get_files
from my.core import get_files, Paths, Res, datetime_aware
from my.core.sqlite import sqlite_connection
import my.config
from .common import hackernews_link
@ -33,9 +33,9 @@ class Item:
id: str
type: str
created: datetime_aware # checked and it's utc
title: str | None # only present for Story
text_html: str | None # should be present for Comment and might for Story
url: str | None # might be present for Story
title: Optional[str] # only present for Story
text_html: Optional[str] # should be present for Comment and might for Story
url: Optional[str] # might be present for Story
# todo process 'deleted'? fields?
# todo process 'parent'?

View file

@ -1,22 +1,17 @@
"""
[[https://play.google.com/store/apps/details?id=com.simon.harmonichackernews][Harmonic]] app for Hackernews
"""
from __future__ import annotations
REQUIRES = ['lxml', 'orjson']
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, TypedDict, cast
import orjson
from pathlib import Path
from typing import Any, Dict, Iterator, List, Optional, Sequence, TypedDict, cast
from lxml import etree
from more_itertools import one
import my.config
from my.core import (
Paths,
Res,
@ -27,10 +22,8 @@ from my.core import (
stat,
)
from my.core.common import unique_everseen
from .common import SavedBase, hackernews_link
import my.config # isort: skip
import my.config
from .common import hackernews_link, SavedBase
logger = make_logger(__name__)
@ -50,7 +43,7 @@ class Cached(TypedDict):
created_at_i: int
id: str
points: int
test: str | None
test: Optional[str]
title: str
type: str # TODO Literal['story', 'comment']? comments are only in 'children' field tho
url: str
@ -101,16 +94,16 @@ def _saved() -> Iterator[Res[Saved]]:
# TODO defensive for each item!
tr = etree.parse(path)
res = one(cast(list[Any], tr.xpath(f'//*[@name="{_PREFIX}_CACHED_STORIES_STRINGS"]')))
res = one(cast(List[Any], tr.xpath(f'//*[@name="{_PREFIX}_CACHED_STORIES_STRINGS"]')))
cached_ids = [x.text.split('-')[0] for x in res]
cached: dict[str, Cached] = {}
cached: Dict[str, Cached] = {}
for sid in cached_ids:
res = one(cast(list[Any], tr.xpath(f'//*[@name="{_PREFIX}_CACHED_STORY{sid}"]')))
res = one(cast(List[Any], tr.xpath(f'//*[@name="{_PREFIX}_CACHED_STORY{sid}"]')))
j = orjson.loads(res.text)
cached[sid] = j
res = one(cast(list[Any], tr.xpath(f'//*[@name="{_PREFIX}_BOOKMARKS"]')))
res = one(cast(List[Any], tr.xpath(f'//*[@name="{_PREFIX}_BOOKMARKS"]')))
for x in res.text.split('-'):
ids, item_timestamp = x.split('q')
# not sure if the timestamp is of any use?
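
A toy sketch of the xpath + orjson parsing pattern in _saved() above; the XML shape is an assumption based on the attribute names in the code:

import orjson
from lxml import etree

xml = b'<map><string name="CACHED_STORY1">{"id": "1", "points": 42}</string></map>'
tr = etree.fromstring(xml)
(res,) = tr.xpath('//*[@name="CACHED_STORY1"]')
j = orjson.loads(res.text)
assert j['points'] == 42
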

View file

@ -1,20 +1,19 @@
"""
[[https://play.google.com/store/apps/details?id=io.github.hidroh.materialistic][Materialistic]] app for Hackernews
"""
from collections.abc import Iterator, Sequence
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, NamedTuple
from typing import Any, Dict, Iterator, NamedTuple, Sequence
from more_itertools import unique_everseen
from my.core import datetime_aware, get_files, make_logger
from my.core import get_files, datetime_aware, make_logger
from my.core.sqlite import sqlite_connection
from my.config import materialistic as config # todo migrate config to my.hackernews.materialistic
from .common import hackernews_link
# todo migrate config to my.hackernews.materialistic
from my.config import materialistic as config # isort: skip
logger = make_logger(__name__)
@ -23,7 +22,7 @@ def inputs() -> Sequence[Path]:
return get_files(config.export_path)
Row = dict[str, Any]
Row = Dict[str, Any]
class Saved(NamedTuple):

View file

@ -4,22 +4,20 @@
REQUIRES = [
'git+https://github.com/karlicoss/hypexport',
]
from collections.abc import Iterator, Sequence
from dataclasses import dataclass
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Iterator, Sequence, TYPE_CHECKING
from my.core import (
get_files,
stat,
Paths,
Res,
Stats,
get_files,
stat,
)
from my.core.cfg import make_config
from my.core.hpi_compat import always_supports_sequence
import my.config # isort: skip
import my.config
@dataclass

Some files were not shown because too many files have changed in this diff.