Mirror of https://github.com/craigerl/aprsd.git (synced 2024-11-10 10:33:31 -05:00)
Compare commits
No commits in common. "master" and "v3.3.0" have entirely different histories.
.github/workflows/manual_build.yml (vendored, 3 changes)
@@ -43,9 +43,8 @@ jobs:
      with:
        context: "{{defaultContext}}:docker"
        platforms: linux/amd64,linux/arm64
-       file: ./Dockerfile
+       file: ./Dockerfile-dev
        build-args: |
          INSTALL_TYPE=github
          BRANCH=${{ steps.extract_branch.outputs.branch }}
          BUILDX_QEMU_ENV=true
        push: true

.github/workflows/master-build.yml (vendored, 5 changes)
@@ -17,7 +17,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
-       python-version: ["3.10", "3.11"]
+       python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
@@ -53,9 +53,8 @@ jobs:
      with:
        context: "{{defaultContext}}:docker"
        platforms: linux/amd64,linux/arm64
-       file: ./Dockerfile
+       file: ./Dockerfile-dev
        build-args: |
          INSTALL_TYPE=github
          BRANCH=${{ steps.branch-name.outputs.current_branch }}
          BUILDX_QEMU_ENV=true
        push: true

.github/workflows/python.yml (vendored, 2 changes)
@@ -7,7 +7,7 @@ jobs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
-       python-version: ["3.10", "3.11"]
+       python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}

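For context, the hunks above touch two standard GitHub Actions patterns: a Python test matrix and a docker/build-push-action step. A minimal sketch of the shape of the matrix job these diffs modify (job and step names are illustrative assumptions, not copied from the aprsd workflows):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # v3.3.0 still tested 3.9; master dropped it from the matrix
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run tests
        run: |
          python -m pip install tox
          tox
```

Each matrix entry runs the job once with `matrix.python-version` substituted, so dropping "3.9" simply removes that interpreter from the build.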
ChangeLog (new file, 937 lines)

@@ -0,0 +1,937 @@
CHANGES
=======

v3.3.0
------

* sample-config fix
* Fixed registry url post
* Changed processpkt message
* Fixed RegistryThread not sending requests
* Use log.setup_logging
* Disable debug logs for aprslib
* Make registry thread sleep
* Put threads first after date/time
* Replace slow rich logging with loguru
* Updated requirements
* Fixed pep8
* Added list-extensions and updated README.rst
* Change defaults for beacon and registry
* Add log info for Beacon and Registry threads
* Fixed frequency_seconds to IntOpt
* Fixed references to conf
* Changed the default packet timeout to 5 minutes
* Fixed default service registry url
* Fix pep8 failures
* py311 fails in github
* Don't send uptime to registry
* Added sending software string to registry
* Add py310 gh actions
* Added the new APRS Registry thread
* Added installing extensions to Docker run
* Cleanup some logs
* Added BeaconPacket
* Updated requirements files
* Removed some unneeded code
* Added iterator to objectstore
* Added some missing classes to threads
* Added support for loading extensions
* Added location for callsign tabs in webchat
* Updated gitignore
* Create codeql.yml
* Update github action branches to v8
* Added Location info on webchat interface
* Updated dev test-plugin command
* Update requirements.txt
* Update for v3.2.3

v3.2.3
------

* Force fortune path during setup test
* Added /usr/games to path
* Added fortune to Dockerfile-dev
* Added missing fortune app
* aprsd: main.py: Fix premature return in sample_config
* Update weather.py because you can't sort icons by penis
* Update weather.py both weather plugins have new Ww regex
* Update weather.py
* Fixed a bug with OWMWeatherPlugin
* Rework Location Plugin

v3.2.2
------

* Update for v3.2.2 release
* Fix for types
* Fix wsgi for prod
* pep8 fixes
* Remove python 3.12 from github builds
* Fixed datetime access in core.py
* Removed invalid reference to config.py
* Updated requirements
* Reworked the admin graphs
* Test new packet serialization
* Try to localize js libs and css for no internet
* Normalize listen --aprs-login
* Bump werkzeug from 2.3.7 to 3.0.1
* Update INSTALL with new conf files
* Bump urllib3 from 2.0.6 to 2.0.7

v3.2.1
------

* Changelog for 3.2.1
* Update index.html to disable form autocomplete
* Update the packet_dupe_timeout warning
* Update the webchat paths
* Changed the path option to a ListOpt
* Fixed default path for tcp_kiss client
* Set a default password for admin
* Fix path for KISS clients
* Added packet_dupe_timeout conf
* Add ability to change path on every TX packet
* Make Packet objects hashable
* Bump urllib3 from 2.0.4 to 2.0.6
* Don't process AckPackets as dupes
* Fixed another msgNo int issue
* Fixed issue with packet tracker and msgNo counter
* Fixed import of MutableMapping
* pep8 fixes
* Rewrote packet_list and drop dupe packets
* Log a warning on dupe
* Fix for dupe packets

v3.2.0
------

* Update Changelog for 3.2.0
* Minor cleanup prior to release
* Webchat: fix input maxlength
* WebChat: cleanup some console.logs
* WebChat: flash a dupe message
* Webchat: Fix issue accessing msg.id
* Webchat: Fix chat css on older browsers
* WebChat: new tab should get focus
* Bump gevent from 23.9.0.post1 to 23.9.1
* Webchat: Fix pep8 errors
* Webchat: Added tab notifications and raw packet
* WebChat: Prevent sending message without callsign
* WebChat: fixed content area scrolling
* Webchat: tweaks to UI for expanding chat
* Webchat: Fixed bug deleting first tab
* Ensure Keepalive doesn't reset client at startup
* Ensure parse_delta_str doesn't puke
* WebChat: Send GPS Beacon working
* Webchat: got active tab onclick working
* Webchat: set to_call to value of tab when selected
* Center the webchat input form
* Update index.html to use chat.css
* Deleted webchat mobile pages
* Added close X on webchat tabs
* Reworked webchat with new UI
* Updated the webchat UI to look like iMessage
* Restore previous conversations in webchat
* Remove VIM from Dockerfile
* Recreate client during reset()
* Updated github workflows
* Updated documentation build
* Removed admin_web.py
* Removed some RPC server log noise
* Fixed admin page packet date
* RPC Server logs the client IP on failed auth
* Start keepalive thread first
* Fixed an issue in the mobile webchat
* Added dupe checking code to webchat mobile
* Click on the div after added
* Webchat: suppress the display of dupe messages
* Convert webchat internet urls to local static urls
* Make use of webchat gps config options
* Added new webchat config section
* Fixed webchat logging.logformat typo

v3.1.3
------

* Prep for 3.1.3
* Forcefully allow development webchat flask

v3.1.2
------

* Updated Changelog for 3.1.2
* Added support for ThirdParty packet types
* Disable the Send GPS Beacon button
* Removed adhoc ssl support in webchat

v3.1.1
------

* Updated Changelog for v3.1.1
* Fixed pep8 failures
* Re-enable USWeatherPlugin to use mapClick
* Fix sending packets over KISS interface
* Use config web_ip for running admin ui from module
* Remove loop log
* Max out the client reconnect backoff to 5
* Update the Dockerfile

v3.1.0
------

* Changelog updates for v3.1.0
* Use CONF.admin.web_port for single launch web admin
* Fixed sio namespace registration
* Update Dockerfile-dev to include uwsgi
* Fixed pep8
* Change port to 8000
* Replacement of flask-socketio with python-socketio
* Change how fetch-stats gets its defaults
* Ensure fetch-stats ip is a string
* Add info logging for rpc server calls
* Updated wsgi config default /config/aprsd.conf
* Added timing after each thread loop
* Update docker bin/admin.sh
* Removed flask-classful from webchat
* Remove flask pinning
* Removed linux/arm/v8
* Update master build to include linux/arm/v8
* Update Dockerfile-dev to fix plugin permissions
* Update manual build github
* Update requirements for upgraded cryptography
* Added more libs for Dockerfile-dev
* Replace Dockerfile-dev with python3 slim
* Moved logging to log for wsgi.py
* Changed weather plugin regex pattern
* Limit the float values to 3 decimal places
* Fixed rain numbers from aprslib
* Fixed rpc client initialization
* Fix in for aprslib issue #80
* Try and fix Dockerfile-dev
* Fixed pep8 errors
* Populate stats object with threads info
* Added counts to the fetch-stats table
* Added the fetch-stats command
* Replace ratelimiter with rush
* Added some utilities to Dockerfile-dev
* Add arm64 for manual github build
* Added manual master build
* Update master-build.yml
* Add github manual trigger for master build
* Fixed unit tests for Location plugin
* Use new tox and update github workflows
* Updated requirements
* Force tox to 4.3.5
* Update github workflows
* Fixed pep8 violation
* Added rpc server for listen
* Update location plugin and reworked requirements
* Fixed .readthedocs.yaml format
* Add .readthedocs.yaml
* Example plugin wrong function
* Ensure conf is imported for threads/tx
* Update Dockerfile to help build cryptography

v3.0.3
------

* Update Changelog to 3.0.3
* Cleanup some debug messages
* Fixed loading of plugins for server
* Don't load help plugin for listen command
* Added listen args
* Change listen command plugins
* Added listen.sh for docker
* Update Listen command
* Update Dockerfile
* Add ratelimiting for acks and other packets

v3.0.2
------

* Update Changelog for 3.0.2
* Import RejectPacket

v3.0.1
------

* 3.0.1
* Add support to Reject messages
* Update Docker builds for 3.0.0

v3.0.0
------

* Update Changelog for 3.0.0
* Ensure server command main thread doesn't exit
* Fixed save directory default
* Fixed pep8 failure
* Cleaned up KISS interfaces use of old config
* Reworked usage of importlib.metadata
* Added new docs files for 3.0.0
* Removed url option from healthcheck in dev
* Updated Healthcheck to use rpc to call aprsd
* Updated docker/bin/run.sh to use new conf
* Added ObjectPacket
* Update regex processing and regex for plugins
* Change ordering of starting up of server command
* Update documentation and README
* Decouple admin web interface from server command
* Dockerfile now produces aprsd.conf
* Fix some unit tests and loading of CONF w/o file
* Added missing conf
* Removed references to old custom config
* Convert config to oslo_config
* Added rain formatting unit tests to WeatherPacket
* Fix Rain reporting in WeatherPacket send
* Removed Packet.send()
* Removed watchlist plugins
* Fix PluginManager.get_plugins
* Cleaned up PluginManager
* Cleaned up PluginManager
* Update routing for weatherpacket
* Fix some WeatherPacket formatting
* Fix pep8 violation
* Add packet filtering for aprsd listen
* Added WeatherPacket encoding
* Updated webchat and listen for queue based RX
* Reworked collecting and reporting stats
* Removed unused threading code
* Change RX packet processing to enqueue
* Make tracking objectstores work w/o initializing
* Cleaned up packet transmit class attributes
* Fix packets timestamp to int
* More messaging -> packets cleanup
* Cleaned out all references to messaging
* Added constructing a GPSPacket for sending
* Cleanup webchat
* Reworked all packet processing
* Updated plugins and plugin interfaces for Packet
* Started using dataclasses to describe packets

v2.6.1
------

* v2.6.1
* Fixed position report for webchat beacon
* Try and fix broken 32bit qemu builds on 64bit system
* Add unit tests for webchat
* Remove armv7 build, RUST sucks
* Fix for Collections change in 3.10

v2.6.0
------

* Update workflow again
* Update Dockerfile to 22.04
* Update Dockerfile and build.sh
* Update workflow
* Prep for 2.6.0 release
* Update requirements
* Removed Makefile comment
* Update Makefile for dev vs. run environments
* Added pyopenssl for https for webchat
* Change from device-detector to user-agents
* Remove twine from dev-requirements
* Update to latest Makefile.venv
* Refactored threads a bit
* Mark packets as acked in MsgTracker
* Remove dev setting for template
* Add GPS beacon to mobile page
* Allow werkzeug for admin interface
* Allow werkzeug for admin interface
* Add support for mobile browsers for webchat
* Ignore callsign case while processing packets
* Remove linux/arm/v7 for official builds for now
* Added workflow for building specific version
* Allow passing in version to the Dockerfile
* Send GPS Beacon from webchat interface
* Specify Dockerfile-dev
* Fixed build.sh
* Build on the source not released aprsd
* Remove email validation
* Add support for building linux/arm/v7
* Remove python 3.7 from docker build github
* Fixed failing unit tests
* Change github workflow
* Removed TimeOpenCageDataPlugin
* Dump config with aprsd dev test-plugin
* Updated requirements
* Got webchat working with KISS tcp
* Added click auto_envvar_prefix
* Update aprsd thread base class to use queue
* Update packets to use wrapt
* Add removing existing requirements
* Try sending raw APRSFrames to aioax25
* Use new aprsd.callsign as the main callsign
* Fixed access to threads refactor
* Added webchat command
* Moved log.py to logging
* Moved trace.py to utils
* Fixed pep8 errors
* Refactored threads.py
* Refactor utils to directory
* Remove arm build for now
* Added rustc and cargo to Dockerfile
* Remove linux/arm/v6 from docker platform build
* Only tag master build as master
* Remove docker build from test
* Create master-build.yml
* Added container build action
* Update docs on using Docker
* Update dev-requirements pip-tools
* Fix typo in docker-compose.yml
* Fix PyPI scraping
* Allow web interface when running in Docker
* Fix typo on exception
* README formatting fixes
* Bump dependencies to fix python 3.10
* Fixed up config option checking for KISS
* Fix logging issue with log messages
* For 2.5.9

v2.5.9
------

* FIX: logging exceptions
* Updated build and run for rich lib
* Update build for 2.5.8

v2.5.8
------

* For 2.5.8
* Removed debug code
* Updated list-plugins
* Renamed virtualenv dir to .aprsd-venv
* Added unit tests for dev test-plugin
* Send Message command defaults to config

v2.5.7
------

* Updated Changelog
* Fixed a KISS config disabled issue
* Fixed a bug with multiple notify plugins enabled
* Unify the logging to file and stdout
* Added new feature to list-plugins command
* More README.rst cleanup
* Updated README examples

v2.5.6
------

* Changelog
* Tightened up the packet logging
* Added unit tests for USWeatherPlugin, USMetarPlugin
* Added test_location to test LocationPlugin
* Updated pytest output
* Added py39 to tox for tests
* Added NotifyPlugin unit tests and more
* Small cleanup on packet logging
* Reduced the APRSIS connection reset to 2 minutes
* Fixed the NotifyPlugin
* Fixed some pep8 errors
* Add tracing for dev command
* Added python rich library based logging
* Added LOG_LEVEL env variable for the docker

v2.5.5
------

* Update requirements to use aprslib 0.7.0
* Fixed the failure during loading for objectstore
* Updated docker build

v2.5.4
------

* Updated Changelog
* Fixed dev command missing initialization

v2.5.3
------

* Fix admin logging tab

v2.5.2
------

* Added new list-plugins command
* Don't require check-version command to have a config
* Healthcheck command doesn't need the aprsd.yml config
* Fix test failures
* Removed requirement for aprs.fi key
* Updated Changelog

v2.5.1
------

* Removed stock plugin
* Removed the stock plugin

v2.5.0
------

* Updated for v2.5.0
* Updated Dockerfiles and build script for docker
* Cleaned up some verbose output & colorized output
* Reworked all the common arguments
* Fixed test-plugin
* Ensure common params are honored
* pep8
* Added healthcheck to the cmds
* Removed the need for FROMCALL in dev test-plugin
* Pep8 failures
* Refactor the cli
* Updated Changelog for 4.2.3
* Fixed a problem with send-message command

v2.4.2
------

* Updated Changelog
* Be more careful picking data to/from disk
* Updated Changelog

v2.4.1
------

* Ensure plugins are last to be loaded
* Fixed email connecting to smtp server

v2.4.0
------

* Updated Changelog for 2.4.0 release
* Converted MsgTrack to ObjectStoreMixin
* Fixed unit tests
* Make sure SeenList update has a from in packet
* Ensure PacketList is initialized
* Added SIGTERM to signal_handler
* Enable configuring where to save the objectstore data
* PEP8 cleanup
* Added objectstore Mixin
* Added -num option to aprsd-dev test-plugin
* Only call stop_threads if it exists
* Added new SeenList
* Added plugin version to stats reporting
* Added new HelpPlugin
* Updated aprsd-dev to use config for logfile format
* Updated build.sh
* Removed usage of config.check_config_option
* Fixed send-message after config/client rework
* Fixed issue with flask config
* Added some server startup info logs
* Increase email delay to +10
* Updated dev to use plugin manager
* Fixed notify plugins
* Added new Config object
* Fixed email plugin's use of globals
* Refactored client classes
* Refactor utils usage
* 2.3.1 Changelog

v2.3.1
------

* Fixed issue of aprs-is missing keepalive
* Fixed packet processing issue with aprsd send-message

v2.3.0
------

* Prep 2.3.0
* Enable plugins to return message object
* Added enabled flag for every plugin object
* Ensure plugin threads are valid
* Updated Dockerfile to use v2.3.0
* Removed fixed size on logging queue
* Added Logfile tab in Admin ui
* Updated Makefile clean target
* Added self creating Makefile help target
* Update dev.py
* Allow passing in aprsis_client
* Fixed a problem with the AVWX plugin not working
* Remove some noisy trace in email plugin
* Fixed issue at startup with notify plugin
* Fixed email validation
* Removed values from forms
* Added send-message to the main admin UI
* Updated requirements
* Cleaned up some pep8 failures
* Upgraded the send-message POC to use websockets
* New Admin ui send message page working
* Send Message via admin Web interface
* Updated Admin UI to show KISS connections
* Got TX/RX working with aioax25+direwolf over TCP
* Rebased from master
* Added the ability to use direwolf KISS socket
* Update Dockerfile to use 2.2.1

v2.2.1
------

* Update Changelog for 2.2.1
* Silence some log noise

v2.2.0
------

* Updated Changelog for v2.2.0
* Updated overview image
* Removed Black code style reference
* Removed TXThread
* Added days to uptime string formatting
* Updated select timeouts
* Rebase from master and run gray
* Added tracking plugin processing
* Added threads functions to APRSDPluginBase
* Refactor Message processing and MORE
* Use Gray instead of Black for code formatting
* Updated tox.ini
* Fixed LOG.debug issue in weather plugin
* Updated slack channel link
* Cleanup of the README.rst
* Fixed aprsd-dev

v2.1.0
------

* Prep for v2.1.0
* Enable multiple replies for plugins
* Put in a fix for aprslib parse exceptions
* Fixed time plugin
* Updated the charts, added the packets chart
* Added showing symbol images to watch list

v2.0.0
------

* Updated docs for 2.0.0
* Reworked the notification threads and admin ui
* Fixed small bug with packets get_packet_type
* Updated overview images
* Move version string output to top of log
* Add new watchlist feature
* Fixed the Ack thread not resending acks
* Reworked the admin ui to use semantic ui more
* Added messages count to admin messages list
* Add admin UI tabs for charts, messages, config
* Removed a noisy debug log
* Dump out the config during startup
* Added message counts for each plugin
* Bump urllib3 from 1.26.4 to 1.26.5
* Added aprsd version checking
* Updated INSTALL.txt
* Update my callsign
* Update README.rst
* Update README.rst
* Bump urllib3 from 1.26.3 to 1.26.4
* Prep for v1.6.1 release

v1.6.1
------

* Removed debug log for KeepAlive thread
* Ignore Makefile.venv
* Reworked Makefile to use Makefile.venv
* Fixed version unit tests
* Updated stats output for KeepAlive thread
* Update Dockerfile-dev to work with startup
* Force all the graphs to 0 minimum
* Added email messages graphs
* Reworked the stats dict output and healthcheck
* Added callsign to the web index page
* Added log config for flask and lnav config file
* Added showing APRS-IS server to stats
* Provide an initial datapoint on rendering index
* Make the index page behind auth
* Bump pygments from 2.7.3 to 2.7.4
* Added acks with messages graphs
* Updated web stats index to show messages and ram usage
* Added aprsd web index page
* Bump lxml from 4.6.2 to 4.6.3
* Bump jinja2 from 2.11.2 to 2.11.3
* Bump urllib3 from 1.26.2 to 1.26.3
* Added log format and dateformat to config file
* Added Dockerfile-dev and updated build.sh
* Require python 3.7 and above
* Added plugin live reload and StockPlugin
* Updated Dockerfile and build.sh
* Updated Dockerfile for multiplatform builds
* Updated Dockerfile for multiplatform builds
* Dockerfile: Make creation of /config a quiet failure
* Updated README docs

v1.6.0
------

* 1.6.0 release prep
* Updated path of run.sh for docker build
* Moved docker related stuff to docker dir
* Removed some noisy debug log
* Bump cryptography from 3.3.1 to 3.3.2
* Wrap another server call with try except
* Wrap all imap calls with try except blocks
* Bump bleach from 3.2.1 to 3.3.0
* EmailThread was exiting because of IMAP timeout, added exceptions for this
* Added memory tracing in keepalive
* Fixed tox pep8 failure for trace
* Added tracing facility
* Fixed email login issue
* Duplicate email messages from RF would generate usage response
* Enable debug logging for smtp and imap
* More debug around email thread
* Debug around EmailThread hanging or vanishing
* Fixed resend email after config rework
* Added flask messages web UI and basic auth
* Fixed an issue with LocationPlugin
* Cleaned up the KeepAlive output
* Updated .gitignore
* Added healthcheck app
* Add flask and flask_classful reqs
* Added Flask web thread and stats collection
* First hack at flask
* Allow email to be disabled
* Reworked the config file and options
* Updated documentation and config output
* Fixed extracting lat/lon
* Added openweathermap weather plugin
* Added new time plugins
* Fixed TimePlugin timezone issue
* Remove fortune white space
* Fix git with install.txt
* Change query char from ? to !
* Updated readme to include readthedocs link
* Added aprsd-dev plugin test cli and WxPlugin

v1.5.1
------

* Updated Changelog for v1.5.1
* Updated README to fix pypi page
* Update INSTALL.txt

v1.5.0
------

* Updated Changelog for v1.5.0 release
* Fix tox tests
* Fix usage statement
* Enabled some emailthread messages and added timestamp
* Fixed main server client initialization
* Test plugin expected responses updated to match query output
* Fixed the queryPlugin unit test
* Removed flask code
* Changed default log level to INFO
* Fix plugin tests to expect new strings
* Fix query command syntax ?, ?3, ?d(elete), ?a(ll)
* Fixed latitude reporting in locationPlugin
* Get rid of some debug noise from tracker and email delay
* Fixed sample-config double print
* Make sample config easier to interpret
* Fixed comments
* Added the ability to add comments to the config file
* Updated docker run.sh script
* Added --raw format for sending messages
* Fixed --quiet option
* Added send-message login checking and --no-ack
* Added new config for aprs.fi API Key
* Added a fix for failed logins to APRS-IS
* Fixed unit test for fortune plugin
* Fixed fortune plugin failures
* Getting out of git hell with client.py problems
* Extend APRS.IS object to change login string
* Extend APRS.IS object to change login string
* Expect different reply from query plugin
* Update query plugin to resend last N messages. Syntax: ?rN
* Added unit test for QueryPlugin
* Updated MsgTrack restart_delayed
* Refactor Plugin objects to plugins directory
* Updated README with more workflow details
* Change query character syntax, don't reply that we're resending stuff
* Added APRSD system diagram to docs
* Disable MX record validation
* Added some more badges to readme files
* Updated build for docs tox -edocs
* Switch command characters for query plugin
* Fix broken test
* Undo git disaster
* Swap Query command characters a bit
* Added Sphinx based documentation
* Refactor Plugin objects to plugins directory
* Updated Makefile
* Removed double-quote-string-fixer
* Lots of fixes
* Added more pre-commit hook tests
* Fixed email shortcut lookup
* Added Makefile for easy dev setup
* Added Makefile for easy dev setup
* Cleaned out old ack_dict
* Add null reply for send_email
* Updated README with more workflow details
* Backout my patch that broke tox, trying to push to craiger-test branch
* Fixed failures caused by last commit
* Don't tell radio emails were sent, ack is enough
* Updated README to include development env
* Added pre-commit hooks
* Update Changelog for v1.5.0
* Added QueryPlugin resend all delayed msgs or Flush
* Added QueryPlugin
* Added support to save/load MsgTrack on exit/start
* Creation of MsgTrack object and other stuff
* Added FortunePlugin unit test
* Added some plugin unit tests
* Reworked threading
* Reworked messaging lib

v1.1.0
------

* Refactored the main process_packet method
* Update README with version 1.1.0 related info
* Added fix for an unknown packet type
* Ensure fortune is installed
* Updated docker-compose
* Added Changelog
* Fixed issue when RX ack
* Updated the aprsd-slack-plugin required version
* Updated README.rst
* Fixed send-message with email command and others
* Update .gitignore
* Big patch
* Major refactor
* Updated the Dockerfile to use alpine

v1.0.1
------

* Fix unknown characterset emails
* Updated logging timestamp to include []
* Updated README with a TOC
* Updates for building containers
* Don't use the dirname for the plugin path search
* Reworked Plugin loading
* Updated README with development information
* Fixed an issue with weather plugin

v1.0.0
------

* Rewrote the README.md to README.rst
* Fixed the usage string after plugins introduced
* Created plugin.py for Command Plugins
* Refactor networking and commands
* Get rid of some debug statements
* Yet another unicode problem, in resend_email, fixed
* Reset default email check delay to 60, fix a few comments
* Update tox environment to fix formatting python errors
* Fixed fortune, yet another unicode issue, tested in py3 and py2
* Lose some logging statements
* Completely off urllib now, tested locate/weather in py2 and py3
* Add urllib import back until I replace all calls with requests
* Cleaned up weather code after switch to requests from urllib, works on py2 and py3
* Switch from urllib to requests for weather, tested in py3 and py2, still need to update locate and all other http calls
* Imap tags are unicode in py3, .decode tags
* Update INSTALL.txt
* Initial conversion to click
* Reconnect on socket timeout
* Clean up code around closed_socket and reconnect
* Update INSTALL.txt
* Fixed all pep8 errors and some py3 errors
* Fix check_email_thread to do proper threading, take delay as arg
* Found another .decode that didn't include errors='ignore'
* Some failed attempts at getting the first txt or html from a multipart message, currently sends the last
* Fix parse_email unicode problems by using body.decode(errors='ignore'), again
* Fix parse_email unicode problems by using body.decode(errors='ignore')
* Clean up code around closed_socket and reconnect
* Socket timeout 5 minutes
* Detect closed socket, reconnect, with a bit more grace
* Can detect closed socket and reconnect now
* Update INSTALL.txt
* More debugging messages trying to find rare tight loop in main
* Update INSTALL.txt
* Main loop went into tight loop, more debug prints
* Main loop went into tight loop, added debug print before every continue
* Update INSTALL.txt
* Update INSTALL.txt
* George Carlin profanity filter
* Added decaying email check timer which resets with activity
* Fixed all pep8 errors and some py3 errors
* Fixed all pep8 errors and some py3 errors
* Reconnect on socket timeout
* Socket reconnect on timeout testing
* Socket timeout of 300 instead of 60
* Reconnect on socket timeout
* Socket reconnect on timeout testing
* Fixed all pep8 errors and some py3 errors
* Fix check_email_thread to do proper threading, take delay as arg
* INSTALL.txt for the average person
* Fix bugs after beautification and yaml config additions. Convert to sockets. Case insensitive commands
* Fix INBOX
* Update README.md
* Added tox support
|
||||
* Fixed SMTP settings
|
||||
* Created fake\_aprs.py
|
||||
* select inbox if gmail server
|
||||
* removed ASS
|
||||
* Added a try block around imap login
|
||||
* Added port and fixed telnet user
|
||||
* Require ~/.aprsd/config.yml
|
||||
* updated README for install and usage instructions
|
||||
* added test to ensure shortcuts in config.yml
|
||||
* added exit if missing config file
|
||||
* Added reading of a config file
|
||||
* update readme
|
||||
* update readme
|
||||
* sanitize readme
|
||||
* readme again again
|
||||
* readme again again
|
||||
* readme again
|
||||
* readme
|
||||
* readme update
|
||||
* First stab at migrating this to a pytpi repo
|
||||
* First stab at migrating this to a pytpi repo
|
||||
* Added password, callsign and host
|
||||
* Added argparse for cli options
|
||||
* comments
|
||||
* Cleaned up trailing whitespace
|
||||
* add tweaked fuzzyclock
|
||||
* make tn a global
|
||||
* Added standard python main()
|
||||
* tweaks to readme
|
||||
* drop virtenv on first line
|
||||
* sanitize readme a bit more
|
||||
* sanitize readme a bit more
|
||||
* sanitize readme
|
||||
* added weather and location 3
|
||||
* added weather and location 2
|
||||
* added weather and location
|
||||
* mapme
|
||||
* de-localize
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* de-localize
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update aprsd.py
|
||||
* Add files via upload
|
||||
* Update README.md
|
||||
* Update aprsd.py
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Update README.md
|
||||
* Add files via upload
|
||||
* Initial commit
|
1194  ChangeLog.md
File diff suppressed because it is too large

24  Makefile
@ -1,5 +1,5 @@
WORKDIR?=.
VENVDIR ?= $(WORKDIR)/.venv
VENVDIR ?= $(WORKDIR)/.aprsd-venv

.DEFAULT_GOAL := help

@ -17,19 +17,14 @@ Makefile.venv:
help:  # Help for the Makefile
    @egrep -h '\s##\s' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

dev: REQUIREMENTS_TXT = requirements.txt requirements-dev.txt
dev: REQUIREMENTS_TXT = requirements.txt dev-requirements.txt
dev: venv ## Create a python virtual environment for development of aprsd

run: venv ## Create a virtual environment for running aprsd commands

changelog: dev
    npm i -g auto-changelog
    auto-changelog -l false --sort-commits date -o ChangeLog.md

docs: changelog
    m2r --overwrite ChangeLog.md
docs: dev
    cp README.rst docs/readme.rst
    mv ChangeLog.rst docs/changelog.rst
    cp Changelog docs/changelog.rst
    tox -edocs

clean: clean-build clean-pyc clean-test clean-dev ## remove all build, test, coverage and Python artifacts

@ -44,6 +39,7 @@ clean-build: ## remove build artifacts
clean-pyc: ## remove Python file artifacts
    find . -name '*.pyc' -exec rm -f {} +
    find . -name '*.pyo' -exec rm -f {} +
    find . -name '*~' -exec rm -f {} +
    find . -name '__pycache__' -exec rm -fr {} +

clean-test: ## remove test and coverage artifacts

@ -59,9 +55,9 @@ clean-dev:
test: dev ## Run all the tox tests
    tox -p all

build: test changelog ## Make the build artifact prior to doing an upload
build: test ## Make the build artifact prior to doing an upload
    $(VENV)/pip install twine
    $(VENV)/python3 -m build
    $(VENV)/python3 setup.py sdist bdist_wheel
    $(VENV)/twine check dist/*

upload: build ## Upload a new version of the plugin

@ -85,8 +81,8 @@ docker-dev: test ## Make a development docker container tagged with hemna6969/a

update-requirements: dev ## Update the requirements.txt and dev-requirements.txt files
    rm requirements.txt
    rm requirements-dev.txt
    rm dev-requirements.txt
    touch requirements.txt
    touch requirements-dev.txt
    touch dev-requirements.txt
    $(VENV)/pip-compile --resolver backtracking --annotation-style=line requirements.in
    $(VENV)/pip-compile --resolver backtracking --annotation-style=line requirements-dev.in
    $(VENV)/pip-compile --resolver backtracking --annotation-style=line dev-requirements.in
96  README.rst
@ -11,37 +11,6 @@ ____________________
`APRSD <http://github.com/craigerl/aprsd>`_ is a Ham radio `APRS <http://aprs.org>`_ message command gateway built on python.


Table of Contents
=================

1. `What is APRSD <#what-is-aprsd>`_
2. `APRSD Overview Diagram <#aprsd-overview-diagram>`_
3. `Typical Use Case <#typical-use-case>`_
4. `Installation <#installation>`_
5. `Example Usage <#example-usage>`_
6. `Help <#help>`_
7. `Commands <#commands>`_

   - `Configuration <#configuration>`_
   - `Server <#server>`_
   - `Current List of Built-in Plugins <#current-list-of-built-in-plugins>`_
   - `Pypi.org APRSD Installable Plugin Packages <#pypiorg-aprsd-installable-plugin-packages>`_
   - `🐍 APRSD Installed 3rd Party Plugins <#aprsd-installed-3rd-party-plugins>`_
   - `Send Message <#send-message>`_
   - `Send Email (Radio to SMTP Server) <#send-email-radio-to-smtp-server>`_
   - `Receive Email (IMAP Server to Radio) <#receive-email-imap-server-to-radio>`_
   - `Location <#location>`_
   - `Web Admin Interface <#web-admin-interface>`_

8. `Development <#development>`_

   - `Building Your Own APRSD Plugins <#building-your-own-aprsd-plugins>`_

9. `Workflow <#workflow>`_
10. `Release <#release>`_
11. `Docker Container <#docker-container>`_

    - `Building <#building-1>`_
    - `Official Build <#official-build>`_
    - `Development Build <#development-build>`_
    - `Running the Container <#running-the-container>`_


What is APRSD
=============
APRSD is a python application for interacting with the APRS network and providing

@ -100,7 +69,6 @@ Help
====
::

    └─> aprsd -h
    Usage: aprsd [OPTIONS] COMMAND [ARGS]...

@ -109,19 +77,18 @@ Help
      -h, --help  Show this message and exit.

    Commands:
      check-version    Check this version against the latest in pypi.org.
      completion       Show the shell completion code
      dev              Development type subcommands
      fetch-stats      Fetch stats from a APRSD admin web interface.
      healthcheck      Check the health of the running aprsd server.
      list-extensions  List the built in plugins available to APRSD.
      list-plugins     List the built in plugins available to APRSD.
      listen           Listen to packets on the APRS-IS Network based on FILTER.
      sample-config    Generate a sample Config file from aprsd and all...
      send-message     Send a message to a callsign via APRS_IS.
      server           Start the aprsd server gateway process.
      version          Show the APRSD version.
      webchat          Web based HAM Radio chat program!
      check-version    Check this version against the latest in pypi.org.
      completion       Click Completion subcommands
      dev              Development type subcommands
      healthcheck      Check the health of the running aprsd server.
      list-plugins     List the built in plugins available to APRSD.
      listen           Listen to packets on the APRS-IS Network based on FILTER.
      sample-config    Generate a sample Config file from aprsd and all...
      send-message     Send a message to a callsign via APRS_IS.
      server           Start the aprsd server gateway process.
      version          Show the APRSD version.
      webchat          Web based HAM Radio chat program!


Commands

@ -178,7 +145,8 @@ look for incomming commands to the callsign configured in the config file


Current list of built-in plugins
--------------------------------
======================================

::

    └─> aprsd list-plugins

@ -330,21 +298,18 @@ AND... ping, fortune, time.....

Web Admin Interface
===================
APRSD has a web admin interface that allows you to view the status of the running APRSD server instance.
The web admin interface shows graphs of packet counts, packet types, number of threads running, the latest
packets sent and received, and the status of each of the plugins that are loaded. You can also view the logfile
and view the raw APRSD configuration file.

To start the web admin interface, You have to install gunicorn in your virtualenv that already has aprsd installed.

::

    source <path to APRSD's virtualenv>/bin/activate
    aprsd admin --loglevel INFO
    pip install gunicorn
    gunicorn --bind 0.0.0.0:8080 "aprsd.wsgi:app"

The web admin interface will be running on port 8080 on the local machine. http://localhost:8080


Development
===========

@ -353,7 +318,7 @@ Development
* ``make``

Workflow
--------
========

While working aprsd, The workflow is as follows:

@ -382,7 +347,7 @@ While working aprsd, The workflow is as follows:


Release
-------
=======

To do release to pypi:

@ -403,29 +368,6 @@ To do release to pypi:
``make upload``


Building your own APRSD plugins
-------------------------------

APRSD plugins are the mechanism by which APRSD can respond to APRS Messages. The plugins are loaded at server startup
and can also be loaded at listen startup. When a packet is received by APRSD, it is passed to each of the plugins
in the order they were registered in the config file. The plugins can then decide what to do with the packet.
When a plugin is called, it is passed a APRSD Packet object. The plugin can then do something with the packet and
return a reply message if desired. If a plugin does not want to reply to the packet, it can just return None.
When a plugin does return a reply message, APRSD will send the reply message to the appropriate destination.

For example, when a 'ping' message is received, the PingPlugin will return a reply message of 'pong'. When APRSD
receives the 'pong' message, it will be sent back to the original caller of the ping message.

APRSD plugins are simply python packages that can be installed from pypi.org. They are installed into the
aprsd virtualenv and can be imported by APRSD at runtime. The plugins are registered in the config file and loaded
at startup of the aprsd server command or the aprsd listen command.
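The dispatch model described in the README text above (each plugin gets the packet, returns a reply or None) can be sketched standalone. Note this is a minimal illustration that does not import aprsd: `PingPlugin`, `filter`, and `dispatch` here are illustrative stand-ins, not the real `aprsd.plugin` base class or server loop, and the packet is modeled as a plain dict.

```python
# Minimal sketch of APRSD-style plugin dispatch (illustrative names,
# not the real aprsd.plugin API).

class PingPlugin:
    """Reply 'pong' to any message whose body is 'ping'."""

    def filter(self, packet: dict):
        # Return a reply message, or None to stay silent.
        if packet.get("message_text", "").strip().lower() == "ping":
            return "pong"
        return None


def dispatch(packet: dict, plugins: list) -> list:
    """Pass the packet to each registered plugin and collect replies."""
    replies = []
    for plugin in plugins:
        reply = plugin.filter(packet)
        if reply is not None:
            replies.append(reply)
    return replies


if __name__ == "__main__":
    pkt = {"from_call": "N0CALL", "message_text": "ping"}
    print(dispatch(pkt, [PingPlugin()]))  # ['pong']
```

The key design point mirrored here is that a plugin signals "no reply" by returning None, so multiple plugins can inspect the same packet without colliding.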
Overview
--------
You can build your own plugins by following the instructions in the `Building your own APRSD plugins`_ section.

Plugins are called by APRSD when packe

Docker Container
================
@ -10,10 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.

from importlib.metadata import PackageNotFoundError, version
import pbr.version


try:
    __version__ = version("aprsd")
except PackageNotFoundError:
    pass
__version__ = pbr.version.VersionInfo("aprsd").version_string()
@ -1,9 +1,8 @@
import click
from functools import update_wrapper
import logging
from pathlib import Path
import typing as t

import click
from oslo_config import cfg

import aprsd

@ -59,7 +58,7 @@ class AliasedGroup(click.Group):
        Copied from `click` and extended for `aliases`.
        """
        def decorator(f):
            aliases = kwargs.pop("aliases", [])
            aliases = kwargs.pop('aliases', [])
            cmd = click.decorators.command(*args, **kwargs)(f)
            self.add_command(cmd)
            for alias in aliases:

@ -75,7 +74,7 @@ class AliasedGroup(click.Group):
        Copied from `click` and extended for `aliases`.
        """
        def decorator(f):
            aliases = kwargs.pop("aliases", [])
            aliases = kwargs.pop('aliases', [])
            cmd = click.decorators.group(*args, **kwargs)(f)
            self.add_command(cmd)
            for alias in aliases:

@ -138,7 +137,7 @@ def process_standard_options_no_config(f: F) -> F:
        ctx.obj["loglevel"] = kwargs["loglevel"]
        ctx.obj["config_file"] = kwargs["config_file"]
        ctx.obj["quiet"] = kwargs["quiet"]
        log.setup_logging(
        log.setup_logging_no_config(
            ctx.obj["loglevel"],
            ctx.obj["quiet"],
        )
348  aprsd/client.py  Normal file
@ -0,0 +1,348 @@
import abc
import logging
import time

import aprslib
from aprslib.exceptions import LoginError
from oslo_config import cfg

from aprsd import exception
from aprsd.clients import aprsis, fake, kiss
from aprsd.packets import core, packet_list
from aprsd.utils import trace


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
TRANSPORT_APRSIS = "aprsis"
TRANSPORT_TCPKISS = "tcpkiss"
TRANSPORT_SERIALKISS = "serialkiss"
TRANSPORT_FAKE = "fake"

# Main must create this from the ClientFactory
# object such that it's populated with the
# Correct config
factory = None


class Client:
    """Singleton client class that constructs the aprslib connection."""

    _instance = None
    _client = None

    connected = False
    server_string = None
    filter = None

    def __new__(cls, *args, **kwargs):
        """This magic turns this into a singleton."""
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # Put any initialization here.
        return cls._instance

    def set_filter(self, filter):
        self.filter = filter
        if self._client:
            self._client.set_filter(filter)

    @property
    def client(self):
        if not self._client:
            LOG.info("Creating APRS client")
            self._client = self.setup_connection()
            if self.filter:
                LOG.info("Creating APRS client filter")
                self._client.set_filter(self.filter)
        return self._client

    def send(self, packet: core.Packet):
        packet_list.PacketList().tx(packet)
        self.client.send(packet)

    def reset(self):
        """Call this to force a rebuild/reconnect."""
        if self._client:
            del self._client
        else:
            LOG.warning("Client not initialized, nothing to reset.")

        # Recreate the client
        LOG.info(f"Creating new client {self.client}")

    @abc.abstractmethod
    def setup_connection(self):
        pass

    @staticmethod
    @abc.abstractmethod
    def is_enabled():
        pass

    @staticmethod
    @abc.abstractmethod
    def transport():
        pass

    @abc.abstractmethod
    def decode_packet(self, *args, **kwargs):
        pass


class APRSISClient(Client):

    _client = None

    @staticmethod
    def is_enabled():
        # Defaults to True if the enabled flag is non existent
        try:
            return CONF.aprs_network.enabled
        except KeyError:
            return False

    @staticmethod
    def is_configured():
        if APRSISClient.is_enabled():
            # Ensure that the config vars are correctly set
            if not CONF.aprs_network.login:
                LOG.error("Config aprs_network.login not set.")
                raise exception.MissingConfigOptionException(
                    "aprs_network.login is not set.",
                )
            if not CONF.aprs_network.password:
                LOG.error("Config aprs_network.password not set.")
                raise exception.MissingConfigOptionException(
                    "aprs_network.password is not set.",
                )
            if not CONF.aprs_network.host:
                LOG.error("Config aprs_network.host not set.")
                raise exception.MissingConfigOptionException(
                    "aprs_network.host is not set.",
                )

            return True
        return True

    def is_alive(self):
        if self._client:
            return self._client.is_alive()
        else:
            return False

    @staticmethod
    def transport():
        return TRANSPORT_APRSIS

    def decode_packet(self, *args, **kwargs):
        """APRS lib already decodes this."""
        return core.Packet.factory(args[0])

    def setup_connection(self):
        user = CONF.aprs_network.login
        password = CONF.aprs_network.password
        host = CONF.aprs_network.host
        port = CONF.aprs_network.port
        connected = False
        backoff = 1
        aprs_client = None
        while not connected:
            try:
                LOG.info("Creating aprslib client")
                aprs_client = aprsis.Aprsdis(user, passwd=password, host=host, port=port)
                # Force the log to be the same
                aprs_client.logger = LOG
                aprs_client.connect()
                connected = True
                backoff = 1
            except LoginError as e:
                LOG.error(f"Failed to login to APRS-IS Server '{e}'")
                connected = False
                time.sleep(backoff)
            except Exception as e:
                LOG.error(f"Unable to connect to APRS-IS server. '{e}' ")
                connected = False
                time.sleep(backoff)
                # Don't allow the backoff to go to inifinity.
                if backoff > 5:
                    backoff = 5
                else:
                    backoff += 1
                continue
        LOG.debug(f"Logging in to APRS-IS with user '{user}'")
        self._client = aprs_client
        return aprs_client


class KISSClient(Client):

    _client = None

    @staticmethod
    def is_enabled():
        """Return if tcp or serial KISS is enabled."""
        if CONF.kiss_serial.enabled:
            return True

        if CONF.kiss_tcp.enabled:
            return True

        return False

    @staticmethod
    def is_configured():
        # Ensure that the config vars are correctly set
        if KISSClient.is_enabled():
            transport = KISSClient.transport()
            if transport == TRANSPORT_SERIALKISS:
                if not CONF.kiss_serial.device:
                    LOG.error("KISS serial enabled, but no device is set.")
                    raise exception.MissingConfigOptionException(
                        "kiss_serial.device is not set.",
                    )
            elif transport == TRANSPORT_TCPKISS:
                if not CONF.kiss_tcp.host:
                    LOG.error("KISS TCP enabled, but no host is set.")
                    raise exception.MissingConfigOptionException(
                        "kiss_tcp.host is not set.",
                    )

            return True
        return False

    def is_alive(self):
        if self._client:
            return self._client.is_alive()
        else:
            return False

    @staticmethod
    def transport():
        if CONF.kiss_serial.enabled:
            return TRANSPORT_SERIALKISS

        if CONF.kiss_tcp.enabled:
            return TRANSPORT_TCPKISS

    def decode_packet(self, *args, **kwargs):
        """We get a frame, which has to be decoded."""
        LOG.debug(f"kwargs {kwargs}")
        frame = kwargs["frame"]
        LOG.debug(f"Got an APRS Frame '{frame}'")
        # try and nuke the * from the fromcall sign.
        # frame.header._source._ch = False
        # payload = str(frame.payload.decode())
        # msg = f"{str(frame.header)}:{payload}"
        # msg = frame.tnc2
        # LOG.debug(f"Decoding {msg}")

        raw = aprslib.parse(str(frame))
        packet = core.Packet.factory(raw)
        if isinstance(packet, core.ThirdParty):
            return packet.subpacket
        else:
            return packet

    def setup_connection(self):
        self._client = kiss.KISS3Client()
        return self._client


class APRSDFakeClient(Client, metaclass=trace.TraceWrapperMetaclass):

    @staticmethod
    def is_enabled():
        if CONF.fake_client.enabled:
            return True
        return False

    @staticmethod
    def is_configured():
        return APRSDFakeClient.is_enabled()

    def is_alive(self):
        return True

    def setup_connection(self):
        return fake.APRSDFakeClient()

    @staticmethod
    def transport():
        return TRANSPORT_FAKE

    def decode_packet(self, *args, **kwargs):
        LOG.debug(f"kwargs {kwargs}")
        pkt = kwargs["packet"]
        LOG.debug(f"Got an APRS Fake Packet '{pkt}'")
        return pkt


class ClientFactory:
    _instance = None

    def __new__(cls, *args, **kwargs):
        """This magic turns this into a singleton."""
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # Put any initialization here.
        return cls._instance

    def __init__(self):
        self._builders = {}

    def register(self, key, builder):
        self._builders[key] = builder

    def create(self, key=None):
        if not key:
            if APRSISClient.is_enabled():
                key = TRANSPORT_APRSIS
            elif KISSClient.is_enabled():
                key = KISSClient.transport()
            elif APRSDFakeClient.is_enabled():
                key = TRANSPORT_FAKE

        builder = self._builders.get(key)
        LOG.debug(f"Creating client {key}")
        if not builder:
            raise ValueError(key)
        return builder()

    def is_client_enabled(self):
        """Make sure at least one client is enabled."""
        enabled = False
        for key in self._builders.keys():
            try:
                enabled |= self._builders[key].is_enabled()
            except KeyError:
                pass

        return enabled

    def is_client_configured(self):
        enabled = False
        for key in self._builders.keys():
            try:
                enabled |= self._builders[key].is_configured()
            except KeyError:
                pass
            except exception.MissingConfigOptionException as ex:
                LOG.error(ex.message)
                return False
            except exception.ConfigOptionBogusDefaultException as ex:
                LOG.error(ex.message)
                return False

        return enabled

    @staticmethod
    def setup():
        """Create and register all possible client objects."""
        global factory

        factory = ClientFactory()
        factory.register(TRANSPORT_APRSIS, APRSISClient)
        factory.register(TRANSPORT_TCPKISS, KISSClient)
        factory.register(TRANSPORT_SERIALKISS, KISSClient)
        factory.register(TRANSPORT_FAKE, APRSDFakeClient)
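The `__new__`-based singleton pattern that both `Client` and `ClientFactory` use in the file above can be exercised in isolation; `Singleton` below is a generic stand-in, not an aprsd class:

```python
class Singleton:
    """Same __new__ trick as aprsd's Client / ClientFactory classes."""

    _instance = None

    def __new__(cls, *args, **kwargs):
        # Only the first call actually allocates an instance;
        # every later call returns the cached one.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the one shared instance
```

One caveat of this pattern (visible in `ClientFactory` above, which also defines `__init__`): Python still runs `__init__` on every construction call, so any state set there is re-initialized each time the "singleton" is requested.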
@ -1,13 +0,0 @@
from aprsd.client import aprsis, factory, fake, kiss


TRANSPORT_APRSIS = "aprsis"
TRANSPORT_TCPKISS = "tcpkiss"
TRANSPORT_SERIALKISS = "serialkiss"
TRANSPORT_FAKE = "fake"


client_factory = factory.ClientFactory()
client_factory.register(aprsis.APRSISClient)
client_factory.register(kiss.KISSClient)
client_factory.register(fake.APRSDFakeClient)
@ -1,135 +0,0 @@
|
||||
import datetime
|
||||
import logging
|
||||
import time
|
||||
|
||||
from aprslib.exceptions import LoginError
|
||||
from oslo_config import cfg
|
||||
|
||||
from aprsd import client, exception
|
||||
from aprsd.client import base
|
||||
from aprsd.client.drivers import aprsis
|
||||
from aprsd.packets import core
|
||||
|
||||
|
||||
CONF = cfg.CONF
|
||||
LOG = logging.getLogger("APRSD")
|
||||
|
||||
|
||||
class APRSISClient(base.APRSClient):
|
||||
|
||||
_client = None
|
||||
|
||||
def __init__(self):
|
||||
max_timeout = {"hours": 0.0, "minutes": 2, "seconds": 0}
|
||||
self.max_delta = datetime.timedelta(**max_timeout)
|
||||
|
||||
def stats(self) -> dict:
|
||||
stats = {}
|
||||
if self.is_configured():
|
||||
stats = {
|
||||
"server_string": self._client.server_string,
|
||||
"sever_keepalive": self._client.aprsd_keepalive,
|
||||
"filter": self.filter,
|
||||
}
|
||||
|
||||
return stats
|
||||
|
||||
@staticmethod
|
||||
def is_enabled():
|
||||
# Defaults to True if the enabled flag is non existent
|
||||
try:
|
||||
return CONF.aprs_network.enabled
|
||||
except KeyError:
|
||||
return False
|
||||
|
||||
@staticmethod
|
||||
def is_configured():
|
||||
if APRSISClient.is_enabled():
|
||||
# Ensure that the config vars are correctly set
|
||||
if not CONF.aprs_network.login:
|
||||
LOG.error("Config aprs_network.login not set.")
|
||||
raise exception.MissingConfigOptionException(
|
||||
"aprs_network.login is not set.",
|
||||
)
|
||||
if not CONF.aprs_network.password:
|
||||
LOG.error("Config aprs_network.password not set.")
|
||||
raise exception.MissingConfigOptionException(
|
||||
"aprs_network.password is not set.",
|
||||
)
|
||||
if not CONF.aprs_network.host:
|
||||
LOG.error("Config aprs_network.host not set.")
|
||||
raise exception.MissingConfigOptionException(
|
||||
"aprs_network.host is not set.",
|
||||
)
|
||||
|
||||
return True
|
||||
return True
|
||||
|
||||
def _is_stale_connection(self):
|
||||
delta = datetime.datetime.now() - self._client.aprsd_keepalive
|
||||
if delta > self.max_delta:
|
||||
LOG.error(f"Connection is stale, last heard {delta} ago.")
|
||||
return True
|
||||
|
||||
def is_alive(self):
|
||||
if self._client:
|
||||
return self._client.is_alive() and not self._is_stale_connection()
|
||||
else:
|
||||
LOG.warning(f"APRS_CLIENT {self._client} alive? NO!!!")
|
||||
return False
|
||||
|
||||
def close(self):
|
||||
if self._client:
|
||||
self._client.stop()
|
||||
self._client.close()
|
||||
|
||||
@staticmethod
|
||||
def transport():
|
||||
return client.TRANSPORT_APRSIS
|
||||
|
||||
def decode_packet(self, *args, **kwargs):
|
||||
"""APRS lib already decodes this."""
|
||||
return core.factory(args[0])
|
||||
|
||||
def setup_connection(self):
|
||||
user = CONF.aprs_network.login
|
||||
password = CONF.aprs_network.password
|
||||
host = CONF.aprs_network.host
|
||||
port = CONF.aprs_network.port
|
||||
self.connected = False
|
||||
backoff = 1
|
||||
aprs_client = None
|
||||
while not self.connected:
|
||||
try:
|
||||
LOG.info(f"Creating aprslib client({host}:{port}) and logging in {user}.")
|
||||
aprs_client = aprsis.Aprsdis(user, passwd=password, host=host, port=port)
|
||||
# Force the log to be the same
|
||||
aprs_client.logger = LOG
|
||||
aprs_client.connect()
|
||||
self.connected = True
|
||||
backoff = 1
|
||||
except LoginError as e:
|
||||
LOG.error(f"Failed to login to APRS-IS Server '{e}'")
|
||||
self.connected = False
|
||||
time.sleep(backoff)
|
||||
except Exception as e:
|
||||
LOG.error(f"Unable to connect to APRS-IS server. '{e}' ")
|
||||
self.connected = False
|
||||
time.sleep(backoff)
|
||||
# Don't allow the backoff to go to inifinity.
|
||||
if backoff > 5:
|
||||
backoff = 5
|
||||
else:
|
||||
backoff += 1
|
||||
continue
|
||||
self._client = aprs_client
|
||||
return aprs_client
|
||||
|
||||
def consumer(self, callback, blocking=False, immortal=False, raw=False):
|
||||
try:
|
||||
self._client.consumer(
|
||||
callback, blocking=blocking,
|
||||
immortal=immortal, raw=raw,
|
||||
)
|
||||
except Exception as e:
|
||||
LOG.error(f"Exception in consumer: {e}")
|
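The reconnect loop in `setup_connection` above grows the sleep between failed attempts and claims to cap it at 5 seconds. Modeling its exact branch (`if backoff > 5: backoff = 5 else: backoff += 1`) shows the delay actually oscillates between 5 and 6 once saturated; the helper name `next_backoff` below is illustrative, not part of aprsd:

```python
def next_backoff(backoff: int) -> int:
    """Mirror the cap logic in setup_connection's except branch."""
    if backoff > 5:
        return 5
    return backoff + 1


# Successive sleep values between failed connection attempts:
delays = []
b = 1
for _ in range(8):
    delays.append(b)
    b = next_backoff(b)
print(delays)  # [1, 2, 3, 4, 5, 6, 5, 6]
```

This kind of trace is a cheap way to check that a hand-rolled backoff behaves as the comment ("Don't allow the backoff to go to inifinity") intends: it stays bounded, just not at exactly 5.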
@ -1,126 +0,0 @@
|
||||
import abc
|
||||
import logging
|
||||
import threading
|
||||
|
||||
from oslo_config import cfg
|
||||
import wrapt
|
||||
|
||||
from aprsd.packets import core
|
||||
|
||||
|
||||
CONF = cfg.CONF
|
||||
LOG = logging.getLogger("APRSD")
|
||||
|
||||
|
||||
class APRSClient:
|
||||
"""Singleton client class that constructs the aprslib connection."""
|
||||
|
||||
_instance = None
|
||||
_client = None
|
||||
|
||||
connected = False
|
||||
filter = None
|
||||
lock = threading.Lock()
|
||||
|
||||
def __new__(cls, *args, **kwargs):
|
||||
"""This magic turns this into a singleton."""
|
||||
if cls._instance is None:
|
||||
cls._instance = super().__new__(cls)
|
||||
# Put any initialization here.
|
||||
cls._instance._create_client()
|
||||
return cls._instance
|
||||
|
||||
@abc.abstractmethod
|
||||
def stats(self) -> dict:
|
||||
"""Return statistics about the client connection.
|
||||
|
||||
Returns:
|
||||
dict: Statistics about the connection and packet handling
|
||||
"""
|
||||
|
||||
def set_filter(self, filter):
|
||||
self.filter = filter
|
||||
if self._client:
|
||||
self._client.set_filter(filter)
|
||||
|
||||
@property
|
||||
def client(self):
|
||||
if not self._client:
|
||||
self._create_client()
|
||||
return self._client
|
||||
|
||||
def _create_client(self):
|
||||
try:
|
||||
self._client = self.setup_connection()
|
||||
if self.filter:
|
||||
LOG.info("Creating APRS client filter")
|
||||
self._client.set_filter(self.filter)
|
||||
except Exception as e:
|
||||
LOG.error(f"Failed to create APRS client: {e}")
|
||||
self._client = None
|
||||
raise
|
||||
|
||||
def stop(self):
|
||||
if self._client:
|
||||
LOG.info("Stopping client connection.")
|
||||
self._client.stop()
|
||||
|
||||
def send(self, packet: core.Packet) -> None:
|
||||
"""Send a packet to the network.
|
||||
|
||||
Args:
|
||||
packet: The APRS packet to send
|
||||
"""
|
||||
self.client.send(packet)
|
||||
|
||||
@wrapt.synchronized(lock)
|
||||
def reset(self) -> None:
|
||||
"""Call this to force a rebuild/reconnect."""
|
||||
LOG.info("Resetting client connection.")
|
||||
if self._client:
|
||||
self._client.close()
|
||||
del self._client
|
||||
self._create_client()
|
||||
else:
|
||||
LOG.warning("Client not initialized, nothing to reset.")
|
||||
|
||||
# Recreate the client
|
||||
LOG.info(f"Creating new client {self.client}")
|
||||
|
||||
@abc.abstractmethod
|
||||
def setup_connection(self):
|
||||
"""Initialize and return the underlying APRS connection.
|
||||
|
||||
Returns:
|
||||
object: The initialized connection object
|
||||
"""
|
||||
|
||||
@staticmethod
|
||||
@abc.abstractmethod
|
||||
def is_enabled():
|
||||
pass
|
||||
|
||||
@staticmethod
|
||||
@abc.abstractmethod
|
||||
def transport():
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
def decode_packet(self, *args, **kwargs):
|
||||
"""Decode raw APRS packet data into a Packet object.
|
||||
|
||||
Returns:
|
||||
Packet: Decoded APRS packet
|
||||
"""
|
||||
|
||||
@abc.abstractmethod
|
||||
def consumer(self, callback, blocking=False, immortal=False, raw=False):
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
def is_alive(self):
|
||||
pass
|
||||
|
||||
@abc.abstractmethod
|
||||
def close(self):
|
||||
pass
|
@@ -1,88 +0,0 @@
import logging
from typing import Callable, Protocol, runtime_checkable

from aprsd import exception
from aprsd.packets import core


LOG = logging.getLogger("APRSD")


@runtime_checkable
class Client(Protocol):

    def __init__(self):
        pass

    def connect(self) -> bool:
        pass

    def disconnect(self) -> bool:
        pass

    def decode_packet(self, *args, **kwargs) -> type[core.Packet]:
        pass

    def is_enabled(self) -> bool:
        pass

    def is_configured(self) -> bool:
        pass

    def transport(self) -> str:
        pass

    def send(self, message: str) -> bool:
        pass

    def setup_connection(self) -> None:
        pass


class ClientFactory:
    _instance = None
    clients = []

    def __new__(cls, *args, **kwargs):
        """This magic turns this into a singleton."""
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            # Put any initialization here.
        return cls._instance

    def __init__(self):
        self.clients: list[Callable] = []

    def register(self, aprsd_client: Callable):
        if isinstance(aprsd_client, Client):
            raise ValueError("Client must be a subclass of Client protocol")

        self.clients.append(aprsd_client)

    def create(self, key=None):
        for client in self.clients:
            if client.is_enabled():
                return client()
        raise Exception("No client is configured!!")

    def is_client_enabled(self):
        """Make sure at least one client is enabled."""
        enabled = False
        for client in self.clients:
            if client.is_enabled():
                enabled = True
        return enabled

    def is_client_configured(self):
        enabled = False
        for client in self.clients:
            try:
                if client.is_configured():
                    enabled = True
            except exception.MissingConfigOptionException as ex:
                LOG.error(ex.message)
                return False
            except exception.ConfigOptionBogusDefaultException as ex:
                LOG.error(ex.message)
                return False
        return enabled
@@ -1,48 +0,0 @@
import logging

from oslo_config import cfg

from aprsd import client
from aprsd.client import base
from aprsd.client.drivers import fake as fake_driver
from aprsd.utils import trace


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


class APRSDFakeClient(base.APRSClient, metaclass=trace.TraceWrapperMetaclass):

    def stats(self) -> dict:
        return {}

    @staticmethod
    def is_enabled():
        if CONF.fake_client.enabled:
            return True
        return False

    @staticmethod
    def is_configured():
        return APRSDFakeClient.is_enabled()

    def is_alive(self):
        return True

    def close(self):
        pass

    def setup_connection(self):
        self.connected = True
        return fake_driver.APRSDFakeClient()

    @staticmethod
    def transport():
        return client.TRANSPORT_FAKE

    def decode_packet(self, *args, **kwargs):
        LOG.debug(f"kwargs {kwargs}")
        pkt = kwargs["packet"]
        LOG.debug(f"Got an APRS Fake Packet '{pkt}'")
        return pkt
@@ -1,103 +0,0 @@
import logging

import aprslib
from oslo_config import cfg

from aprsd import client, exception
from aprsd.client import base
from aprsd.client.drivers import kiss
from aprsd.packets import core


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


class KISSClient(base.APRSClient):

    _client = None

    def stats(self) -> dict:
        stats = {}
        if self.is_configured():
            return {
                "transport": self.transport(),
            }
        return stats

    @staticmethod
    def is_enabled():
        """Return if tcp or serial KISS is enabled."""
        if CONF.kiss_serial.enabled:
            return True

        if CONF.kiss_tcp.enabled:
            return True

        return False

    @staticmethod
    def is_configured():
        # Ensure that the config vars are correctly set
        if KISSClient.is_enabled():
            transport = KISSClient.transport()
            if transport == client.TRANSPORT_SERIALKISS:
                if not CONF.kiss_serial.device:
                    LOG.error("KISS serial enabled, but no device is set.")
                    raise exception.MissingConfigOptionException(
                        "kiss_serial.device is not set.",
                    )
            elif transport == client.TRANSPORT_TCPKISS:
                if not CONF.kiss_tcp.host:
                    LOG.error("KISS TCP enabled, but no host is set.")
                    raise exception.MissingConfigOptionException(
                        "kiss_tcp.host is not set.",
                    )

            return True
        return False

    def is_alive(self):
        if self._client:
            return self._client.is_alive()
        else:
            return False

    def close(self):
        if self._client:
            self._client.stop()

    @staticmethod
    def transport():
        if CONF.kiss_serial.enabled:
            return client.TRANSPORT_SERIALKISS

        if CONF.kiss_tcp.enabled:
            return client.TRANSPORT_TCPKISS

    def decode_packet(self, *args, **kwargs):
        """We get a frame, which has to be decoded."""
        LOG.debug(f"kwargs {kwargs}")
        frame = kwargs["frame"]
        LOG.debug(f"Got an APRS Frame '{frame}'")
        # try and nuke the * from the fromcall sign.
        # frame.header._source._ch = False
        # payload = str(frame.payload.decode())
        # msg = f"{str(frame.header)}:{payload}"
        # msg = frame.tnc2
        # LOG.debug(f"Decoding {msg}")

        raw = aprslib.parse(str(frame))
        packet = core.factory(raw)
        if isinstance(packet, core.ThirdPartyPacket):
            return packet.subpacket
        else:
            return packet

    def setup_connection(self):
        self._client = kiss.KISS3Client()
        self.connected = True
        return self._client

    def consumer(self, callback, blocking=False, immortal=False, raw=False):
        self._client.consumer(callback)
@@ -1,38 +0,0 @@
import threading

from oslo_config import cfg
import wrapt

from aprsd import client
from aprsd.utils import singleton


CONF = cfg.CONF


@singleton
class APRSClientStats:

    lock = threading.Lock()

    @wrapt.synchronized(lock)
    def stats(self, serializable=False):
        cl = client.client_factory.create()
        stats = {
            "transport": cl.transport(),
            "filter": cl.filter,
            "connected": cl.connected,
        }

        if cl.transport() == client.TRANSPORT_APRSIS:
            stats["server_string"] = cl.client.server_string
            keepalive = cl.client.aprsd_keepalive
            if serializable:
                keepalive = keepalive.isoformat()
            stats["server_keepalive"] = keepalive
        elif cl.transport() == client.TRANSPORT_TCPKISS:
            stats["host"] = CONF.kiss_tcp.host
            stats["port"] = CONF.kiss_tcp.port
        elif cl.transport() == client.TRANSPORT_SERIALKISS:
            stats["device"] = CONF.kiss_serial.device
        return stats
@@ -1,4 +1,3 @@
import datetime
import logging
import select
import threading
@@ -12,6 +11,7 @@ from aprslib.exceptions import (
import wrapt

import aprsd
from aprsd import stats
from aprsd.packets import core


@@ -24,20 +24,13 @@ class Aprsdis(aprslib.IS):
    # flag to tell us to stop
    thread_stop = False

    # date for last time we heard from the server
    aprsd_keepalive = datetime.datetime.now()

    # timeout in seconds
    select_timeout = 1
    lock = threading.Lock()

    def stop(self):
        self.thread_stop = True
        LOG.warning("Shutdown Aprsdis client.")

    def close(self):
        LOG.warning("Closing Aprsdis client.")
        super().close()
        LOG.info("Shutdown Aprsdis client.")

    @wrapt.synchronized(lock)
    def send(self, packet: core.Packet):
@@ -149,6 +142,7 @@ class Aprsdis(aprslib.IS):

            self.logger.info(f"Connected to {server_string}")
            self.server_string = server_string
            stats.APRSDStats().set_aprsis_server(server_string)

        except LoginError as e:
            self.logger.error(str(e))
@@ -182,25 +176,24 @@ class Aprsdis(aprslib.IS):
        try:
            for line in self._socket_readlines(blocking):
                if line[0:1] != b"#":
                    self.aprsd_keepalive = datetime.datetime.now()
                    if raw:
                        callback(line)
                    else:
                        callback(self._parse(line))
                else:
                    self.logger.debug("Server: %s", line.decode("utf8"))
                    self.aprsd_keepalive = datetime.datetime.now()
                    stats.APRSDStats().set_aprsis_keepalive()
        except ParseError as exp:
            self.logger.log(
                11,
                "%s Packet: '%s'",
                "%s\n Packet: %s",
                exp,
                exp.packet,
            )
        except UnknownFormat as exp:
            self.logger.log(
                9,
                "%s Packet: '%s'",
                "%s\n Packet: %s",
                exp,
                exp.packet,
            )
@@ -67,7 +67,7 @@ class APRSDFakeClient(metaclass=trace.TraceWrapperMetaclass):
            # Generate packets here?
            raw = "GTOWN>APDW16,WIDE1-1,WIDE2-1:}KM6LYW-9>APZ100,TCPIP,GTOWN*::KM6LYW :KM6LYW: 19 Miles SW"
            pkt_raw = aprslib.parse(raw)
            pkt = core.factory(pkt_raw)
            pkt = core.Packet.factory(pkt_raw)
            callback(packet=pkt)
            LOG.debug(f"END blocking FAKE consumer {self}")
            time.sleep(8)
@@ -81,7 +81,7 @@ class KISS3Client:
            LOG.error("Failed to parse bytes received from KISS interface.")
            LOG.exception(ex)

    def consumer(self, callback):
    def consumer(self, callback, blocking=False, immortal=False, raw=False):
        LOG.debug("Start blocking KISS consumer")
        self._parse_callback = callback
        self.kiss.read(callback=self.parse_frame, min_frames=None)
@@ -1,57 +0,0 @@
import logging
import os
import signal

import click
from oslo_config import cfg
import socketio

import aprsd
from aprsd import cli_helper
from aprsd import main as aprsd_main
from aprsd import utils
from aprsd.main import cli


os.environ["APRSD_ADMIN_COMMAND"] = "1"
# this import has to happen AFTER we set the
# above environment variable, so that the code
# inside the wsgi.py has the value
from aprsd import wsgi as aprsd_wsgi  # noqa


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


# main() ###
@cli.command()
@cli_helper.add_options(cli_helper.common_options)
@click.pass_context
@cli_helper.process_standard_options
def admin(ctx):
    """Start the aprsd admin interface."""
    signal.signal(signal.SIGINT, aprsd_main.signal_handler)
    signal.signal(signal.SIGTERM, aprsd_main.signal_handler)

    level, msg = utils._check_version()
    if level:
        LOG.warning(msg)
    else:
        LOG.info(msg)
    LOG.info(f"APRSD Started version: {aprsd.__version__}")
    # Dump all the config options now.
    CONF.log_opt_values(LOG, logging.DEBUG)

    async_mode = "threading"
    sio = socketio.Server(logger=True, async_mode=async_mode)
    aprsd_wsgi.app.wsgi_app = socketio.WSGIApp(sio, aprsd_wsgi.app.wsgi_app)
    aprsd_wsgi.init_app()
    sio.register_namespace(aprsd_wsgi.LoggingNamespace("/logs"))
    CONF.log_opt_values(LOG, logging.DEBUG)
    aprsd_wsgi.app.run(
        threaded=True,
        debug=False,
        port=CONF.admin.web_port,
        host=CONF.admin.web_ip,
    )
@@ -1,5 +1,5 @@
import click
import click.shell_completion
import click_completion

from aprsd.main import cli

@@ -7,16 +7,30 @@ from aprsd.main import cli
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])


@cli.command()
@click.argument("shell", type=click.Choice(list(click.shell_completion._available_shells)))
def completion(shell):
    """Show the shell completion code"""
    from click.utils import _detect_program_name
@cli.group(help="Click Completion subcommands", context_settings=CONTEXT_SETTINGS)
@click.pass_context
def completion(ctx):
    pass

    cls = click.shell_completion.get_completion_class(shell)
    prog_name = _detect_program_name()
    complete_var = f"_{prog_name}_COMPLETE".replace("-", "_").upper()
    print(cls(cli, {}, prog_name, complete_var).source())
    print("# Add the following line to your shell configuration file to have aprsd command line completion")
    print("# but remove the leading '#' character.")
    print(f"# eval \"$(aprsd completion {shell})\"")

# show dumps out the completion code for a particular shell
@completion.command(help="Show completion code for shell", name="show")
@click.option("-i", "--case-insensitive/--no-case-insensitive", help="Case insensitive completion")
@click.argument("shell", required=False, type=click_completion.DocumentedChoice(click_completion.core.shells))
def show(shell, case_insensitive):
    """Show the click-completion-command completion code"""
    extra_env = {"_CLICK_COMPLETION_COMMAND_CASE_INSENSITIVE_COMPLETE": "ON"} if case_insensitive else {}
    click.echo(click_completion.core.get_code(shell, extra_env=extra_env))


# install will install the completion code for a particular shell
@completion.command(help="Install completion code for a shell", name="install")
@click.option("--append/--overwrite", help="Append the completion code to the file", default=None)
@click.option("-i", "--case-insensitive/--no-case-insensitive", help="Case insensitive completion")
@click.argument("shell", required=False, type=click_completion.DocumentedChoice(click_completion.core.shells))
@click.argument("path", required=False)
def install(append, case_insensitive, shell, path):
    """Install the click-completion-command completion"""
    extra_env = {"_CLICK_COMPLETION_COMMAND_CASE_INSENSITIVE_COMPLETE": "ON"} if case_insensitive else {}
    shell, path = click_completion.core.install(shell=shell, path=path, append=append, extra_env=extra_env)
    click.echo(f"{shell} completion installed in {path}")
@@ -8,9 +8,8 @@ import logging
import click
from oslo_config import cfg

from aprsd import cli_helper, conf, packets, plugin
# local imports here
from aprsd.client import base
from aprsd import cli_helper, client, conf, packets, plugin
from aprsd.main import cli
from aprsd.utils import trace

@@ -97,7 +96,7 @@ def test_plugin(
    if CONF.trace_enabled:
        trace.setup_tracing(["method", "api"])

    base.APRSClient()
    client.Client()

    pm = plugin.PluginManager()
    if load_all:
@@ -1,9 +1,10 @@
# Fetch active stats from a remote running instance of aprsd admin web interface.
# Fetch active stats from a remote running instance of aprsd server
# This uses the RPC server to fetch the stats from the remote server.

import logging

import click
from oslo_config import cfg
import requests
from rich.console import Console
from rich.table import Table

@@ -11,7 +12,7 @@ from rich.table import Table
import aprsd
from aprsd import cli_helper
from aprsd.main import cli
from aprsd.threads.stats import StatsStore
from aprsd.rpc import client as rpc_client


# setup the global logger
@@ -25,80 +26,83 @@ CONF = cfg.CONF
@click.option(
    "--host", type=str,
    default=None,
    help="IP address of the remote aprsd admin web ui fetch stats from.",
    help="IP address of the remote aprsd server to fetch stats from.",
)
@click.option(
    "--port", type=int,
    default=None,
    help="Port of the remote aprsd web admin interface to fetch stats from.",
    help="Port of the remote aprsd server rpc port to fetch stats from.",
)
@click.option(
    "--magic-word", type=str,
    default=None,
    help="Magic word of the remote aprsd server rpc port to fetch stats from.",
)
@click.pass_context
@cli_helper.process_standard_options
def fetch_stats(ctx, host, port):
    """Fetch stats from a APRSD admin web interface."""
    console = Console()
    console.print(f"APRSD Fetch-Stats started version: {aprsd.__version__}")
def fetch_stats(ctx, host, port, magic_word):
    """Fetch stats from a remote running instance of aprsd server."""
    LOG.info(f"APRSD Fetch-Stats started version: {aprsd.__version__}")

    CONF.log_opt_values(LOG, logging.DEBUG)
    if not host:
        host = CONF.admin.web_ip
        host = CONF.rpc_settings.ip
    if not port:
        port = CONF.admin.web_port
        port = CONF.rpc_settings.port
    if not magic_word:
        magic_word = CONF.rpc_settings.magic_word

    msg = f"Fetching stats from {host}:{port}"
    msg = f"Fetching stats from {host}:{port} with magic word '{magic_word}'"
    console = Console()
    console.print(msg)
    with console.status(msg):
        response = requests.get(f"http://{host}:{port}/stats", timeout=120)
        if not response:
            console.print(
                f"Failed to fetch stats from {host}:{port}?",
                style="bold red",
            )
            return

        stats = response.json()
        if not stats:
            console.print(
                f"Failed to fetch stats from aprsd admin ui at {host}:{port}",
                style="bold red",
            )
            return

        client = rpc_client.RPCClient(host, port, magic_word)
        stats = client.get_stats_dict()
        console.print_json(data=stats)
        aprsd_title = (
            "APRSD "
            f"[bold cyan]v{stats['APRSDStats']['version']}[/] "
            f"Callsign [bold green]{stats['APRSDStats']['callsign']}[/] "
            f"Uptime [bold yellow]{stats['APRSDStats']['uptime']}[/]"
            f"[bold cyan]v{stats['aprsd']['version']}[/] "
            f"Callsign [bold green]{stats['aprsd']['callsign']}[/] "
            f"Uptime [bold yellow]{stats['aprsd']['uptime']}[/]"
        )

        console.rule(f"Stats from {host}:{port}")
        console.rule(f"Stats from {host}:{port} with magic word '{magic_word}'")
        console.print("\n\n")
        console.rule(aprsd_title)

        # Show the connection to APRS
        # It can be a connection to an APRS-IS server or a local TNC via KISS or KISSTCP
        if "aprs-is" in stats:
            title = f"APRS-IS Connection {stats['APRSClientStats']['server_string']}"
            title = f"APRS-IS Connection {stats['aprs-is']['server']}"
            table = Table(title=title)
            table.add_column("Key")
            table.add_column("Value")
            for key, value in stats["APRSClientStats"].items():
            for key, value in stats["aprs-is"].items():
                table.add_row(key, value)
            console.print(table)

        threads_table = Table(title="Threads")
        threads_table.add_column("Name")
        threads_table.add_column("Alive?")
        for name, alive in stats["APRSDThreadList"].items():
        for name, alive in stats["aprsd"]["threads"].items():
            threads_table.add_row(name, str(alive))

        console.print(threads_table)

        msgs_table = Table(title="Messages")
        msgs_table.add_column("Key")
        msgs_table.add_column("Value")
        for key, value in stats["messages"].items():
            msgs_table.add_row(key, str(value))

        console.print(msgs_table)

        packet_totals = Table(title="Packet Totals")
        packet_totals.add_column("Key")
        packet_totals.add_column("Value")
        packet_totals.add_row("Total Received", str(stats["PacketList"]["rx"]))
        packet_totals.add_row("Total Sent", str(stats["PacketList"]["tx"]))
        packet_totals.add_row("Total Received", str(stats["packets"]["total_received"]))
        packet_totals.add_row("Total Sent", str(stats["packets"]["total_sent"]))
        packet_totals.add_row("Total Tracked", str(stats["packets"]["total_tracked"]))
        console.print(packet_totals)

        # Show each of the packet types
@@ -106,206 +110,47 @@ def fetch_stats(ctx, host, port):
        packets_table.add_column("Packet Type")
        packets_table.add_column("TX")
        packets_table.add_column("RX")
        for key, value in stats["PacketList"]["packets"].items():
        for key, value in stats["packets"]["by_type"].items():
            packets_table.add_row(key, str(value["tx"]), str(value["rx"]))

        console.print(packets_table)

        if "plugins" in stats:
            count = len(stats["PluginManager"])
            count = len(stats["plugins"])
            plugins_table = Table(title=f"Plugins ({count})")
            plugins_table.add_column("Plugin")
            plugins_table.add_column("Enabled")
            plugins_table.add_column("Version")
            plugins_table.add_column("TX")
            plugins_table.add_column("RX")
            plugins = stats["PluginManager"]
            for key, value in plugins.items():
            for key, value in stats["plugins"].items():
                plugins_table.add_row(
                    key,
                    str(plugins[key]["enabled"]),
                    plugins[key]["version"],
                    str(plugins[key]["tx"]),
                    str(plugins[key]["rx"]),
                    str(stats["plugins"][key]["enabled"]),
                    stats["plugins"][key]["version"],
                    str(stats["plugins"][key]["tx"]),
                    str(stats["plugins"][key]["rx"]),
                )

            console.print(plugins_table)

        seen_list = stats.get("SeenList")

        if seen_list:
            count = len(seen_list)
        if "seen_list" in stats["aprsd"]:
            count = len(stats["aprsd"]["seen_list"])
            seen_table = Table(title=f"Seen List ({count})")
            seen_table.add_column("Callsign")
            seen_table.add_column("Message Count")
            seen_table.add_column("Last Heard")
            for key, value in seen_list.items():
            for key, value in stats["aprsd"]["seen_list"].items():
                seen_table.add_row(key, str(value["count"]), value["last"])

            console.print(seen_table)

        watch_list = stats.get("WatchList")

        if watch_list:
            count = len(watch_list)
        if "watch_list" in stats["aprsd"]:
            count = len(stats["aprsd"]["watch_list"])
            watch_table = Table(title=f"Watch List ({count})")
            watch_table.add_column("Callsign")
            watch_table.add_column("Last Heard")
            for key, value in watch_list.items():
            for key, value in stats["aprsd"]["watch_list"].items():
                watch_table.add_row(key, value["last"])

            console.print(watch_table)


@cli.command()
@cli_helper.add_options(cli_helper.common_options)
@click.option(
    "--raw",
    is_flag=True,
    default=False,
    help="Dump raw stats instead of formatted output.",
)
@click.option(
    "--show-section",
    default=["All"],
    help="Show specific sections of the stats. "
    " Choices: All, APRSDStats, APRSDThreadList, APRSClientStats,"
    " PacketList, SeenList, WatchList",
    multiple=True,
    type=click.Choice(
        [
            "All",
            "APRSDStats",
            "APRSDThreadList",
            "APRSClientStats",
            "PacketList",
            "SeenList",
            "WatchList",
        ],
        case_sensitive=False,
    ),
)
@click.pass_context
@cli_helper.process_standard_options
def dump_stats(ctx, raw, show_section):
    """Dump the current stats from the running APRSD instance."""
    console = Console()
    console.print(f"APRSD Dump-Stats started version: {aprsd.__version__}")

    with console.status("Dumping stats"):
        ss = StatsStore()
        ss.load()
        stats = ss.data
        if raw:
            if "All" in show_section:
                console.print(stats)
                return
            else:
                for section in show_section:
                    console.print(f"Dumping {section} section:")
                    console.print(stats[section])
                return

        t = Table(title="APRSD Stats")
        t.add_column("Key")
        t.add_column("Value")
        for key, value in stats["APRSDStats"].items():
            t.add_row(key, str(value))

        if "All" in show_section or "APRSDStats" in show_section:
            console.print(t)

        # Show the thread list
        t = Table(title="Thread List")
        t.add_column("Name")
        t.add_column("Class")
        t.add_column("Alive?")
        t.add_column("Loop Count")
        t.add_column("Age")
        for name, value in stats["APRSDThreadList"].items():
            t.add_row(
                name,
                value["class"],
                str(value["alive"]),
                str(value["loop_count"]),
                str(value["age"]),
            )

        if "All" in show_section or "APRSDThreadList" in show_section:
            console.print(t)

        # Show the plugins
        t = Table(title="Plugin List")
        t.add_column("Name")
        t.add_column("Enabled")
        t.add_column("Version")
        t.add_column("TX")
        t.add_column("RX")
        for name, value in stats["PluginManager"].items():
            t.add_row(
                name,
                str(value["enabled"]),
                value["version"],
                str(value["tx"]),
                str(value["rx"]),
            )

        if "All" in show_section or "PluginManager" in show_section:
            console.print(t)

        # Now show the client stats
        t = Table(title="Client Stats")
        t.add_column("Key")
        t.add_column("Value")
        for key, value in stats["APRSClientStats"].items():
            t.add_row(key, str(value))

        if "All" in show_section or "APRSClientStats" in show_section:
            console.print(t)

        # now show the packet list
        packet_list = stats.get("PacketList")
        t = Table(title="Packet List")
        t.add_column("Key")
        t.add_column("Value")
        t.add_row("Total Received", str(packet_list["rx"]))
        t.add_row("Total Sent", str(packet_list["tx"]))

        if "All" in show_section or "PacketList" in show_section:
            console.print(t)

        # now show the seen list
        seen_list = stats.get("SeenList")
        sorted_seen_list = sorted(
            seen_list.items(),
        )
        t = Table(title="Seen List")
        t.add_column("Callsign")
        t.add_column("Message Count")
        t.add_column("Last Heard")
        for key, value in sorted_seen_list:
            t.add_row(
                key,
                str(value["count"]),
                str(value["last"]),
            )

        if "All" in show_section or "SeenList" in show_section:
            console.print(t)

        # now show the watch list
        watch_list = stats.get("WatchList")
        sorted_watch_list = sorted(
            watch_list.items(),
        )
        t = Table(title="Watch List")
        t.add_column("Callsign")
        t.add_column("Last Heard")
        for key, value in sorted_watch_list:
            t.add_row(
                key,
                str(value["last"]),
            )

        if "All" in show_section or "WatchList" in show_section:
            console.print(t)
@ -13,11 +13,11 @@ from oslo_config import cfg
|
||||
from rich.console import Console
|
||||
|
||||
import aprsd
|
||||
from aprsd import cli_helper
|
||||
from aprsd import cli_helper, utils
|
||||
from aprsd import conf # noqa
|
||||
# local imports here
|
||||
from aprsd.main import cli
|
||||
from aprsd.threads import stats as stats_threads
|
||||
from aprsd.rpc import client as aprsd_rpc_client
|
||||
|
||||
|
||||
# setup the global logger
|
||||
@ -39,48 +39,46 @@ console = Console()
|
||||
@cli_helper.process_standard_options
|
||||
def healthcheck(ctx, timeout):
|
||||
"""Check the health of the running aprsd server."""
|
||||
ver_str = f"APRSD HealthCheck version: {aprsd.__version__}"
|
||||
console.log(ver_str)
|
||||
console.log(f"APRSD HealthCheck version: {aprsd.__version__}")
|
||||
if not CONF.rpc_settings.enabled:
LOG.error("Must enable rpc_settings.enabled to use healthcheck")
sys.exit(-1)
if not CONF.rpc_settings.ip:
LOG.error("Must enable rpc_settings.ip to use healthcheck")
sys.exit(-1)
if not CONF.rpc_settings.magic_word:
LOG.error("Must enable rpc_settings.magic_word to use healthcheck")
sys.exit(-1)

with console.status(ver_str):
with console.status(f"APRSD HealthCheck version: {aprsd.__version__}") as status:
try:
stats_obj = stats_threads.StatsStore()
stats_obj.load()
stats = stats_obj.data
# console.print(stats)
status.update(f"Contacting APRSD via RPC {CONF.rpc_settings.ip}")
stats = aprsd_rpc_client.RPCClient().get_stats_dict()
except Exception as ex:
console.log(f"Failed to load stats: '{ex}'")
console.log(f"Failed to fetch healthcheck : '{ex}'")
sys.exit(-1)
else:
now = datetime.datetime.now()
if not stats:
console.log("No stats from aprsd")
sys.exit(-1)
email_thread_last_update = stats["email"]["thread_last_update"]

email_stats = stats.get("EmailStats")
if email_stats:
email_thread_last_update = email_stats["last_check_time"]

if email_thread_last_update != "never":
d = now - email_thread_last_update
max_timeout = {"hours": 0.0, "minutes": 5, "seconds": 0}
max_delta = datetime.timedelta(**max_timeout)
if d > max_delta:
console.log(f"Email thread is very old! {d}")
sys.exit(-1)

client_stats = stats.get("APRSClientStats")
if not client_stats:
console.log("No APRSClientStats")
sys.exit(-1)
else:
aprsis_last_update = client_stats["server_keepalive"]
d = now - aprsis_last_update
if email_thread_last_update != "never":
delta = utils.parse_delta_str(email_thread_last_update)
d = datetime.timedelta(**delta)
max_timeout = {"hours": 0.0, "minutes": 5, "seconds": 0}
max_delta = datetime.timedelta(**max_timeout)
if d > max_delta:
LOG.error(f"APRS-IS last update is very old! {d}")
console.log(f"Email thread is very old! {d}")
sys.exit(-1)

console.log("OK")
aprsis_last_update = stats["aprs-is"]["last_update"]
delta = utils.parse_delta_str(aprsis_last_update)
d = datetime.timedelta(**delta)
max_timeout = {"hours": 0.0, "minutes": 5, "seconds": 0}
max_delta = datetime.timedelta(**max_timeout)
if d > max_delta:
LOG.error(f"APRS-IS last update is very old! {d}")
sys.exit(-1)

sys.exit(0)
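The healthcheck hunk above repeatedly compares a last-update timestamp against a five-minute `datetime.timedelta` window. A minimal standalone sketch of that staleness test (the function name and inputs are illustrative, not part of the diff):

```python
import datetime


def is_stale(last_update: datetime.datetime, now: datetime.datetime) -> bool:
    # Anything older than the 5-minute window used in the healthcheck
    # is treated as unhealthy.
    max_delta = datetime.timedelta(hours=0.0, minutes=5, seconds=0)
    return (now - last_update) > max_delta


now = datetime.datetime.now()
assert is_stale(now - datetime.timedelta(minutes=10), now)
assert not is_stale(now - datetime.timedelta(minutes=1), now)
```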
@@ -21,7 +21,7 @@ from aprsd import cli_helper
from aprsd import plugin as aprsd_plugin
from aprsd.main import cli
from aprsd.plugins import (
email, fortune, location, notify, ping, time, version, weather,
email, fortune, location, notify, ping, query, time, version, weather,
)


@@ -122,7 +122,7 @@ def get_installed_extensions():


def show_built_in_plugins(console):
modules = [email, fortune, location, notify, ping, time, version, weather]
modules = [email, fortune, location, notify, ping, query, time, version, weather]
plugins = []

for module in modules:
@@ -10,29 +10,21 @@ import sys
import time

import click
from loguru import logger
from oslo_config import cfg
from rich.console import Console

# local imports here
import aprsd
from aprsd import cli_helper, packets, plugin, threads, utils
from aprsd.client import client_factory
from aprsd import cli_helper, client, packets, plugin, stats, threads
from aprsd.main import cli
from aprsd.packets import collector as packet_collector
from aprsd.packets import log as packet_log
from aprsd.packets import seen_list
from aprsd.stats import collector
from aprsd.threads import keep_alive, rx
from aprsd.threads import stats as stats_thread
from aprsd.threads.aprsd import APRSDThread
from aprsd.rpc import server as rpc_server
from aprsd.threads import rx


# setup the global logger
# log.basicConfig(level=log.DEBUG) # level=10
LOG = logging.getLogger("APRSD")
CONF = cfg.CONF
LOGU = logger
console = Console()


@@ -45,93 +37,45 @@ def signal_handler(sig, frame):
),
)
time.sleep(5)
# Last save to disk
collector.Collector().collect()
LOG.info(stats.APRSDStats())


class APRSDListenThread(rx.APRSDRXThread):
def __init__(
self, packet_queue, packet_filter=None, plugin_manager=None,
enabled_plugins=[], log_packets=False,
):
def __init__(self, packet_queue, packet_filter=None, plugin_manager=None):
super().__init__(packet_queue)
self.packet_filter = packet_filter
self.plugin_manager = plugin_manager
if self.plugin_manager:
LOG.info(f"Plugins {self.plugin_manager.get_message_plugins()}")
self.log_packets = log_packets

def process_packet(self, *args, **kwargs):
packet = self._client.decode_packet(*args, **kwargs)
filters = {
packets.Packet.__name__: packets.Packet,
packets.AckPacket.__name__: packets.AckPacket,
packets.BeaconPacket.__name__: packets.BeaconPacket,
packets.GPSPacket.__name__: packets.GPSPacket,
packets.MessagePacket.__name__: packets.MessagePacket,
packets.MicEPacket.__name__: packets.MicEPacket,
packets.ObjectPacket.__name__: packets.ObjectPacket,
packets.StatusPacket.__name__: packets.StatusPacket,
packets.ThirdPartyPacket.__name__: packets.ThirdPartyPacket,
packets.WeatherPacket.__name__: packets.WeatherPacket,
packets.UnknownPacket.__name__: packets.UnknownPacket,
}

if self.packet_filter:
filter_class = filters[self.packet_filter]
if isinstance(packet, filter_class):
if self.log_packets:
packet_log.log(packet)
packet.log(header="RX")
if self.plugin_manager:
# Don't do anything with the reply
# This is the listen only command.
self.plugin_manager.run(packet)
else:
if self.log_packets:
LOG.error("PISS")
packet_log.log(packet)
if self.plugin_manager:
# Don't do anything with the reply.
# This is the listen only command.
self.plugin_manager.run(packet)
else:
packet.log(header="RX")

packet_collector.PacketCollector().rx(packet)


class ListenStatsThread(APRSDThread):
"""Log the stats from the PacketList."""

def __init__(self):
super().__init__("PacketStatsLog")
self._last_total_rx = 0

def loop(self):
if self.loop_count % 10 == 0:
# log the stats every 10 seconds
stats_json = collector.Collector().collect()
stats = stats_json["PacketList"]
total_rx = stats["rx"]
rx_delta = total_rx - self._last_total_rx
rate = rx_delta / 10

# Log summary stats
LOGU.opt(colors=True).info(
f"<green>RX Rate: {rate} pps</green> "
f"<yellow>Total RX: {total_rx}</yellow> "
f"<red>RX Last 10 secs: {rx_delta}</red>",
)
self._last_total_rx = total_rx

# Log individual type stats
for k, v in stats["types"].items():
thread_hex = f"fg {utils.hex_from_name(k)}"
LOGU.opt(colors=True).info(
f"<{thread_hex}>{k:<15}</{thread_hex}> "
f"<blue>RX: {v['rx']}</blue> <red>TX: {v['tx']}</red>",
)

time.sleep(1)
return True
packets.PacketList().rx(packet)


@cli.command()
@@ -152,27 +96,17 @@ class ListenStatsThread(APRSDThread):
"--packet-filter",
type=click.Choice(
[
packets.Packet.__name__,
packets.AckPacket.__name__,
packets.BeaconPacket.__name__,
packets.GPSPacket.__name__,
packets.MicEPacket.__name__,
packets.MessagePacket.__name__,
packets.ObjectPacket.__name__,
packets.RejectPacket.__name__,
packets.StatusPacket.__name__,
packets.ThirdPartyPacket.__name__,
packets.UnknownPacket.__name__,
packets.WeatherPacket.__name__,
],
case_sensitive=False,
),
help="Filter by packet type",
)
@click.option(
"--enable-plugin",
multiple=True,
help="Enable a plugin. This is the name of the file in the plugins directory.",
)
@click.option(
"--load-plugins",
default=False,
@@ -184,18 +118,6 @@ class ListenStatsThread(APRSDThread):
nargs=-1,
required=True,
)
@click.option(
"--log-packets",
default=False,
is_flag=True,
help="Log incoming packets.",
)
@click.option(
"--enable-packet-stats",
default=False,
is_flag=True,
help="Enable packet stats periodic logging.",
)
@click.pass_context
@cli_helper.process_standard_options
def listen(
@@ -203,11 +125,8 @@ def listen(
aprs_login,
aprs_password,
packet_filter,
enable_plugin,
load_plugins,
filter,
log_packets,
enable_packet_stats,
):
"""Listen to packets on the APRS-IS Network based on FILTER.

@@ -240,73 +159,56 @@ def listen(
LOG.info(f"APRSD Listen Started version: {aprsd.__version__}")

CONF.log_opt_values(LOG, logging.DEBUG)
collector.Collector()

# Try and load saved MsgTrack list
LOG.debug("Loading saved MsgTrack object.")

# Initialize the client factory and create
# The correct client object ready for use
client.ClientFactory.setup()
# Make sure we have 1 client transport enabled
if not client_factory.is_client_enabled():
if not client.factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)

# Creates the client object
LOG.info("Creating client connection")
aprs_client = client_factory.create()
aprs_client = client.factory.create()
LOG.info(aprs_client)

LOG.debug(f"Filter by '{filter}'")
aprs_client.set_filter(filter)

keepalive = keep_alive.KeepAliveThread()
keepalive = threads.KeepAliveThread()
keepalive.start()

if not CONF.enable_seen_list:
# just deregister the class from the packet collector
packet_collector.PacketCollector().unregister(seen_list.SeenList)
if CONF.rpc_settings.enabled:
rpc = rpc_server.APRSDRPCThread()
rpc.start()

pm = None
pm = plugin.PluginManager()
if load_plugins:
pm = plugin.PluginManager()
LOG.info("Loading plugins")
pm.setup_plugins(load_help_plugin=False)
elif enable_plugin:
pm = plugin.PluginManager()
pm.setup_plugins(
load_help_plugin=False,
plugin_list=enable_plugin,
)
else:
LOG.warning(
"Not Loading any plugins use --load-plugins to load what's "
"defined in the config file.",
)

if pm:
for p in pm.get_plugins():
LOG.info("Loaded plugin %s", p.__class__.__name__)

stats = stats_thread.APRSDStatsStoreThread()
stats.start()

LOG.debug("Create APRSDListenThread")
listen_thread = APRSDListenThread(
packet_queue=threads.packet_queue,
packet_filter=packet_filter,
plugin_manager=pm,
enabled_plugins=enable_plugin,
log_packets=log_packets,
)
LOG.debug("Start APRSDListenThread")
listen_thread.start()
if enable_packet_stats:
listen_stats = ListenStatsThread()
listen_stats.start()

keepalive.start()
LOG.debug("keepalive Join")
keepalive.join()
LOG.debug("listen_thread Join")
listen_thread.join()
stats.join()

if CONF.rpc_settings.enabled:
rpc.join()
@@ -8,13 +8,9 @@ import click
from oslo_config import cfg

import aprsd
from aprsd import cli_helper, packets
from aprsd import cli_helper, client, packets
from aprsd import conf # noqa : F401
from aprsd.client import client_factory
from aprsd.main import cli
import aprsd.packets # noqa : F401
from aprsd.packets import collector
from aprsd.packets import log as packet_log
from aprsd.threads import tx


@@ -80,6 +76,7 @@ def send_message(
aprs_login = CONF.aprs_network.login

if not aprs_password:
LOG.warning(CONF.aprs_network.password)
if not CONF.aprs_network.password:
click.echo("Must set --aprs-password or APRS_PASSWORD")
ctx.exit(-1)
@@ -96,15 +93,19 @@ def send_message(
else:
LOG.info(f"L'{aprs_login}' To'{tocallsign}' C'{command}'")

packets.PacketList()
packets.WatchList()
packets.SeenList()

got_ack = False
got_response = False

def rx_packet(packet):
global got_ack, got_response
cl = client_factory.create()
cl = client.factory.create()
packet = cl.decode_packet(packet)
collector.PacketCollector().rx(packet)
packet_log.log(packet, tx=False)
packets.PacketList().rx(packet)
packet.log("RX")
# LOG.debug("Got packet back {}".format(packet))
if isinstance(packet, packets.AckPacket):
got_ack = True
@@ -129,7 +130,8 @@ def send_message(
sys.exit(0)

try:
client_factory.create().client
client.ClientFactory.setup()
client.factory.create().client
except LoginError:
sys.exit(-1)

@@ -161,7 +163,7 @@ def send_message(
# This will register a packet consumer with aprslib
# When new packets come in the consumer will process
# the packet
aprs_client = client_factory.create().client
aprs_client = client.factory.create().client
aprs_client.consumer(rx_packet, raw=False)
except aprslib.exceptions.ConnectionDrop:
LOG.error("Connection dropped, reconnecting")
@@ -6,16 +6,12 @@ import click
from oslo_config import cfg

import aprsd
from aprsd import cli_helper
from aprsd import cli_helper, client
from aprsd import main as aprsd_main
from aprsd import plugin, threads, utils
from aprsd.client import client_factory
from aprsd import packets, plugin, threads, utils
from aprsd.main import cli
from aprsd.packets import collector as packet_collector
from aprsd.packets import seen_list
from aprsd.threads import keep_alive, log_monitor, registry, rx
from aprsd.threads import stats as stats_thread
from aprsd.threads import tx
from aprsd.rpc import server as rpc_server
from aprsd.threads import registry, rx, tx


CONF = cfg.CONF
@@ -50,14 +46,7 @@ def server(ctx, flush):

# Initialize the client factory and create
# The correct client object ready for use
if not client_factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)

# Creates the client object
LOG.info("Creating client connection")
aprs_client = client_factory.create()
LOG.info(aprs_client)
client.ClientFactory.setup()

# Create the initial PM singleton and Register plugins
# We register plugins first here so we can register each
@@ -79,35 +68,35 @@ def server(ctx, flush):
LOG.info(p)

# Make sure we have 1 client transport enabled
if not client_factory.is_client_enabled():
if not client.factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)

if not client_factory.is_client_configured():
if not client.factory.is_client_configured():
LOG.error("APRS client is not properly configured in config file.")
sys.exit(-1)

if not CONF.enable_seen_list:
# just deregister the class from the packet collector
packet_collector.PacketCollector().unregister(seen_list.SeenList)
# Creates the client object
# LOG.info("Creating client connection")
# client.factory.create().client

# Now load the msgTrack from disk if any
packets.PacketList()
if flush:
LOG.debug("Flushing All packet tracking objects.")
packet_collector.PacketCollector().flush()
LOG.debug("Deleting saved MsgTrack.")
packets.PacketTrack().flush()
packets.WatchList().flush()
packets.SeenList().flush()
else:
# Try and load saved MsgTrack list
LOG.debug("Loading saved packet tracking data.")
packet_collector.PacketCollector().load()
LOG.debug("Loading saved MsgTrack object.")
packets.PacketTrack().load()
packets.WatchList().load()
packets.SeenList().load()

# Now start all the main processing threads.

keepalive = keep_alive.KeepAliveThread()
keepalive = threads.KeepAliveThread()
keepalive.start()

stats_store_thread = stats_thread.APRSDStatsStoreThread()
stats_store_thread.start()

rx_thread = rx.APRSDPluginRXThread(
packet_queue=threads.packet_queue,
)
@@ -117,6 +106,7 @@ def server(ctx, flush):
rx_thread.start()
process_thread.start()

packets.PacketTrack().restart()
if CONF.enable_beacon:
LOG.info("Beacon Enabled. Starting Beacon thread.")
bcn_thread = tx.BeaconSendThread()
@@ -127,9 +117,11 @@ def server(ctx, flush):
registry_thread = registry.APRSRegistryThread()
registry_thread.start()

if CONF.admin.web_enabled:
log_monitor_thread = log_monitor.LogMonitorThread()
log_monitor_thread.start()
if CONF.rpc_settings.enabled:
rpc = rpc_server.APRSDRPCThread()
rpc.start()
log_monitor = threads.log_monitor.LogMonitorThread()
log_monitor.start()

rx_thread.join()
process_thread.join()
@@ -7,6 +7,7 @@ import sys
import threading
import time

from aprslib import util as aprslib_util
import click
import flask
from flask import request
@@ -21,15 +22,15 @@ import aprsd
from aprsd import (
cli_helper, client, packets, plugin_utils, stats, threads, utils,
)
from aprsd.client import client_factory, kiss
from aprsd.log import log
from aprsd.main import cli
from aprsd.threads import aprsd as aprsd_threads
from aprsd.threads import keep_alive, rx, tx
from aprsd.threads import rx, tx
from aprsd.utils import trace


CONF = cfg.CONF
LOG = logging.getLogger()
LOG = logging.getLogger("APRSD")
auth = HTTPBasicAuth()
users = {}
socketio = None
@@ -62,7 +63,9 @@ def signal_handler(sig, frame):
threads.APRSDThreadList().stop_all()
if "subprocess" not in str(frame):
time.sleep(1.5)
stats.stats_collector.collect()
# packets.WatchList().save()
# packets.SeenList().save()
LOG.info(stats.APRSDStats())
LOG.info("Telling flask to bail.")
signal.signal(signal.SIGTERM, sys.exit(0))

@@ -332,6 +335,7 @@ class WebChatProcessPacketThread(rx.APRSDProcessPacketThread):

def process_our_message_packet(self, packet: packets.MessagePacket):
global callsign_locations
LOG.info(f"process MessagePacket {repr(packet)}")
# ok lets see if we have the location for the
# person we just sent a message to.
from_call = packet.get("from_call").upper()
@@ -377,10 +381,10 @@ def _get_transport(stats):
transport = "aprs-is"
aprs_connection = (
"APRS-IS Server: <a href='http://status.aprs2.net' >"
"{}</a>".format(stats["APRSClientStats"]["server_string"])
"{}</a>".format(stats["stats"]["aprs-is"]["server"])
)
elif kiss.KISSClient.is_enabled():
transport = kiss.KISSClient.transport()
elif client.KISSClient.is_enabled():
transport = client.KISSClient.transport()
if transport == client.TRANSPORT_TCPKISS:
aprs_connection = (
"TCPKISS://{}:{}".format(
@@ -418,7 +422,7 @@ def index():
html_template = "index.html"
LOG.debug(f"Template {html_template}")

transport, aprs_connection = _get_transport(stats["stats"])
transport, aprs_connection = _get_transport(stats)
LOG.debug(f"transport {transport} aprs_connection {aprs_connection}")

stats["transport"] = transport
@@ -453,28 +457,27 @@ def send_message_status():


def _stats():
stats_obj = stats.APRSDStats()
now = datetime.datetime.now()

time_format = "%m-%d-%Y %H:%M:%S"
stats_dict = stats.stats_collector.collect(serializable=True)
stats_dict = stats_obj.stats()
# Webchat doesnt need these
if "WatchList" in stats_dict:
del stats_dict["WatchList"]
if "SeenList" in stats_dict:
del stats_dict["SeenList"]
if "APRSDThreadList" in stats_dict:
del stats_dict["APRSDThreadList"]
if "PacketList" in stats_dict:
del stats_dict["PacketList"]
if "EmailStats" in stats_dict:
del stats_dict["EmailStats"]
if "PluginManager" in stats_dict:
del stats_dict["PluginManager"]
if "watch_list" in stats_dict["aprsd"]:
del stats_dict["aprsd"]["watch_list"]
if "seen_list" in stats_dict["aprsd"]:
del stats_dict["aprsd"]["seen_list"]
if "threads" in stats_dict["aprsd"]:
del stats_dict["aprsd"]["threads"]
# del stats_dict["email"]
# del stats_dict["plugins"]
# del stats_dict["messages"]

result = {
"time": now.strftime(time_format),
"stats": stats_dict,
}

return result


@@ -538,27 +541,18 @@ class SendMessageNamespace(Namespace):

def on_gps(self, data):
LOG.debug(f"WS on_GPS: {data}")
lat = data["latitude"]
long = data["longitude"]
LOG.debug(f"Lat {lat}")
LOG.debug(f"Long {long}")
path = data.get("path", None)
if not path:
path = []
elif "," in path:
path_opts = path.split(",")
path = [x.strip() for x in path_opts]
else:
path = [path]
lat = aprslib_util.latitude_to_ddm(data["latitude"])
long = aprslib_util.longitude_to_ddm(data["longitude"])
LOG.debug(f"Lat DDM {lat}")
LOG.debug(f"Long DDM {long}")

tx.send(
packets.BeaconPacket(
packets.GPSPacket(
from_call=CONF.callsign,
to_call="APDW16",
latitude=lat,
longitude=long,
comment="APRSD WebChat Beacon",
path=path,
),
direct=True,
)
@@ -578,6 +572,8 @@ class SendMessageNamespace(Namespace):
def init_flask(loglevel, quiet):
global socketio, flask_app

log.setup_logging(loglevel, quiet)

socketio = SocketIO(
flask_app, logger=False, engineio_logger=False,
async_mode="threading",
@@ -628,7 +624,7 @@ def webchat(ctx, flush, port):
LOG.info(msg)
LOG.info(f"APRSD Started version: {aprsd.__version__}")

CONF.log_opt_values(logging.getLogger(), logging.DEBUG)
CONF.log_opt_values(LOG, logging.DEBUG)
user = CONF.admin.user
users[user] = generate_password_hash(CONF.admin.password)
if not port:
@@ -636,16 +632,22 @@ def webchat(ctx, flush, port):

# Initialize the client factory and create
# The correct client object ready for use
client.ClientFactory.setup()
# Make sure we have 1 client transport enabled
if not client_factory.is_client_enabled():
if not client.factory.is_client_enabled():
LOG.error("No Clients are enabled in config.")
sys.exit(-1)

if not client_factory.is_client_configured():
if not client.factory.is_client_configured():
LOG.error("APRS client is not properly configured in config file.")
sys.exit(-1)

keepalive = keep_alive.KeepAliveThread()
packets.PacketList()
packets.PacketTrack()
packets.WatchList()
packets.SeenList()

keepalive = threads.KeepAliveThread()
LOG.info("Start KeepAliveThread")
keepalive.start()
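The `on_gps` hunk above parses an optional comma-separated digipeater path into a list. A standalone sketch of that parsing step, assuming `path` is either `None`, a single hop, or a comma-separated string (the helper name and sample hops are illustrative):

```python
def parse_path(path):
    # Mirrors the branch logic in on_gps: no path -> empty list,
    # comma separated -> split and strip, single hop -> one-element list.
    if not path:
        return []
    if "," in path:
        return [x.strip() for x in path.split(",")]
    return [path]


assert parse_path(None) == []
assert parse_path("WIDE1-1") == ["WIDE1-1"]
assert parse_path("WIDE1-1, WIDE2-1") == ["WIDE1-1", "WIDE2-1"]
```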
@@ -15,6 +15,10 @@ watch_list_group = cfg.OptGroup(
name="watch_list",
title="Watch List settings",
)
rpc_group = cfg.OptGroup(
name="rpc_settings",
title="RPC Settings for admin <--> web",
)
webchat_group = cfg.OptGroup(
name="webchat",
title="Settings specific to the webchat command",
@@ -97,51 +101,6 @@ aprsd_opts = [
default=None,
help="Longitude for the GPS Beacon button. If not set, the button will not be enabled.",
),
cfg.StrOpt(
"log_packet_format",
choices=["compact", "multiline", "both"],
default="compact",
help="When logging packets 'compact' will use a single line formatted for each packet."
"'multiline' will use multiple lines for each packet and is the traditional format."
"both will log both compact and multiline.",
),
cfg.IntOpt(
"default_packet_send_count",
default=3,
help="The number of times to send a non ack packet before giving up.",
),
cfg.IntOpt(
"default_ack_send_count",
default=3,
help="The number of times to send an ack packet in response to recieving a packet.",
),
cfg.IntOpt(
"packet_list_maxlen",
default=100,
help="The maximum number of packets to store in the packet list.",
),
cfg.IntOpt(
"packet_list_stats_maxlen",
default=20,
help="The maximum number of packets to send in the stats dict for admin ui.",
),
cfg.BoolOpt(
"enable_seen_list",
default=True,
help="Enable the Callsign seen list tracking feature. This allows aprsd to keep track of "
"callsigns that have been seen and when they were last seen.",
),
cfg.BoolOpt(
"enable_packet_logging",
default=True,
help="Set this to False, to disable logging of packets to the log file.",
),
cfg.BoolOpt(
"enable_sending_ack_packets",
default=True,
help="Set this to False, to disable sending of ack packets. This will entirely stop"
"APRSD from sending ack packets.",
),
]

watch_list_opts = [
@@ -179,7 +138,7 @@ admin_opts = [
default=False,
help="Enable the Admin Web Interface",
),
cfg.StrOpt(
cfg.IPOpt(
"web_ip",
default="0.0.0.0",
help="The ip address to listen on",
@@ -202,6 +161,28 @@ admin_opts = [
),
]

rpc_opts = [
cfg.BoolOpt(
"enabled",
default=True,
help="Enable RPC calls",
),
cfg.StrOpt(
"ip",
default="localhost",
help="The ip address to listen on",
),
cfg.PortOpt(
"port",
default=18861,
help="The port to listen on",
),
cfg.StrOpt(
"magic_word",
default=APRSD_DEFAULT_MAGIC_WORD,
help="Magic word to authenticate requests between client/server",
),
]

enabled_plugins_opts = [
cfg.ListOpt(
@@ -211,6 +192,7 @@ enabled_plugins_opts = [
"aprsd.plugins.fortune.FortunePlugin",
"aprsd.plugins.location.LocationPlugin",
"aprsd.plugins.ping.PingPlugin",
"aprsd.plugins.query.QueryPlugin",
"aprsd.plugins.time.TimePlugin",
"aprsd.plugins.weather.OWMWeatherPlugin",
"aprsd.plugins.version.VersionPlugin",
@@ -223,7 +205,7 @@ enabled_plugins_opts = [
]

webchat_opts = [
cfg.StrOpt(
cfg.IPOpt(
"web_ip",
default="0.0.0.0",
help="The ip address to listen on",
@@ -243,15 +225,10 @@ webchat_opts = [
default=None,
help="Longitude for the GPS Beacon button. If not set, the button will not be enabled.",
),
cfg.BoolOpt(
"disable_url_request_logging",
default=False,
help="Disable the logging of url requests in the webchat command.",
),
]

registry_opts = [
cfg.BoolOpt(
cfg.StrOpt(
"enabled",
default=False,
help="Enable sending aprs registry information. This will let the "
@@ -291,6 +268,8 @@ def register_opts(config):
config.register_opts(admin_opts, group=admin_group)
config.register_group(watch_list_group)
config.register_opts(watch_list_opts, group=watch_list_group)
config.register_group(rpc_group)
config.register_opts(rpc_opts, group=rpc_group)
config.register_group(webchat_group)
config.register_opts(webchat_opts, group=webchat_group)
config.register_group(registry_group)
@@ -302,6 +281,7 @@ def list_opts():
"DEFAULT": (aprsd_opts + enabled_plugins_opts),
admin_group.name: admin_opts,
watch_list_group.name: watch_list_opts,
rpc_group.name: rpc_opts,
webchat_group.name: webchat_opts,
registry_group.name: registry_opts,
}
@@ -31,6 +31,13 @@ aprsfi_opts = [
),
]

query_plugin_opts = [
cfg.StrOpt(
"callsign",
help="The Ham callsign to allow access to the query plugin from RF.",
),
]

owm_wx_opts = [
cfg.StrOpt(
"apiKey",
@@ -165,6 +172,7 @@ def register_opts(config):
config.register_group(aprsfi_group)
config.register_opts(aprsfi_opts, group=aprsfi_group)
config.register_group(query_group)
config.register_opts(query_plugin_opts, group=query_group)
config.register_group(owm_wx_group)
config.register_opts(owm_wx_opts, group=owm_wx_group)
config.register_group(avwx_group)
@@ -176,6 +184,7 @@ def register_opts(config):
def list_opts():
return {
aprsfi_group.name: aprsfi_opts,
query_group.name: query_plugin_opts,
owm_wx_group.name: owm_wx_opts,
avwx_group.name: avwx_opts,
location_group.name: location_opts,
@@ -1,4 +1,5 @@
import logging
from logging import NullHandler
from logging.handlers import QueueHandler
import queue
import sys
@@ -6,28 +7,12 @@ import sys
from loguru import logger
from oslo_config import cfg

from aprsd.conf import log as conf_log
from aprsd import conf


CONF = cfg.CONF
# LOG = logging.getLogger("APRSD")
LOG = logger


class QueueLatest(queue.Queue):
"""Custom Queue to keep only the latest N items.

This prevents the queue from blowing up in size.
"""
def put(self, *args, **kwargs):
try:
super().put(*args, **kwargs)
except queue.Full:
self.queue.popleft()
super().put(*args, **kwargs)


logging_queue = QueueLatest(maxsize=200)
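The `QueueLatest` class added in the hunk above can be exercised standalone; this sketch shows the intended drop-oldest behavior when the queue is full (note that `queue.Full` is only raised for non-blocking puts, so the example uses `put_nowait`; the 5-item loop is illustrative):

```python
import queue


class QueueLatest(queue.Queue):
    """Queue that evicts its oldest entry instead of raising queue.Full."""

    def put(self, *args, **kwargs):
        try:
            super().put(*args, **kwargs)
        except queue.Full:
            # Drop the oldest item, then retry the put.
            self.queue.popleft()
            super().put(*args, **kwargs)


q = QueueLatest(maxsize=3)
for i in range(5):
    q.put_nowait(i)  # once full, the oldest entries are evicted
assert list(q.queue) == [2, 3, 4]
```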
|
||||
LOG = logging.getLogger("APRSD")
|
||||
logging_queue = queue.Queue()
|
||||
|
||||
|
||||
class InterceptHandler(logging.Handler):
|
||||
@ -54,7 +39,7 @@ def setup_logging(loglevel=None, quiet=False):
|
||||
if not loglevel:
|
||||
log_level = CONF.logging.log_level
|
||||
else:
|
||||
log_level = conf_log.LOG_LEVELS[loglevel]
|
||||
log_level = conf.log.LOG_LEVELS[loglevel]
|
||||
|
||||
# intercept everything at the root logger
|
||||
logging.root.handlers = [InterceptHandler()]
|
||||
@ -69,19 +54,9 @@ def setup_logging(loglevel=None, quiet=False):
|
||||
"aprslib.parsing",
|
||||
"aprslib.exceptions",
|
||||
]
|
||||
webserver_list = [
|
||||
"werkzeug",
|
||||
"werkzeug._internal",
|
||||
"socketio",
|
||||
"urllib3.connectionpool",
|
||||
"chardet",
|
||||
"chardet.charsetgroupprober",
|
||||
"chardet.eucjpprober",
|
||||
"chardet.mbcharsetprober",
|
||||
]
|
||||
|
||||
# We don't really want to see the aprslib parsing debug output.
|
||||
disable_list = imap_list + aprslib_list + webserver_list
|
||||
disable_list = imap_list + aprslib_list
|
||||
|
||||
# remove every other logger's handlers
|
||||
# and propagate to root logger
|
||||
@ -92,29 +67,17 @@ def setup_logging(loglevel=None, quiet=False):
|
||||
else:
|
||||
logging.getLogger(name).propagate = True
|
||||
|
||||
if CONF.webchat.disable_url_request_logging:
|
||||
for name in webserver_list:
|
||||
logging.getLogger(name).handlers = []
|
||||
logging.getLogger(name).propagate = True
|
||||
logging.getLogger(name).setLevel(logging.ERROR)
|
||||
|
||||
handlers = [
|
||||
{
|
||||
"sink": sys.stdout,
|
||||
"serialize": False,
|
||||
"sink": sys.stdout, "serialize": False,
|
||||
"format": CONF.logging.logformat,
|
||||
"colorize": True,
|
||||
"level": log_level,
|
||||
},
|
||||
]
|
||||
if CONF.logging.logfile:
|
||||
handlers.append(
|
||||
{
|
||||
"sink": CONF.logging.logfile,
|
||||
"serialize": False,
|
||||
"sink": CONF.logging.logfile, "serialize": False,
|
||||
"format": CONF.logging.logformat,
|
||||
"colorize": False,
|
||||
"level": log_level,
|
||||
},
|
||||
)
|
||||
|
||||
@ -128,11 +91,25 @@ def setup_logging(loglevel=None, quiet=False):
|
||||
{
|
||||
"sink": qh, "serialize": False,
|
||||
"format": CONF.logging.logformat,
|
||||
"level": log_level,
|
||||
"colorize": False,
|
||||
},
|
||||
)
|
||||
|
||||
# configure loguru
|
||||
logger.configure(handlers=handlers)
|
||||
logger.level("DEBUG", color="<fg #BABABA>")
|
||||
|
||||
|
||||
def setup_logging_no_config(loglevel, quiet):
|
||||
log_level = conf.log.LOG_LEVELS[loglevel]
|
||||
LOG.setLevel(log_level)
|
||||
log_format = CONF.logging.logformat
|
||||
date_format = CONF.logging.date_format
|
||||
log_formatter = logging.Formatter(fmt=log_format, datefmt=date_format)
|
||||
fh = NullHandler()
|
||||
|
||||
fh.setFormatter(log_formatter)
|
||||
LOG.addHandler(fh)
|
||||
|
||||
if not quiet:
|
||||
sh = logging.StreamHandler(sys.stdout)
|
||||
sh.setFormatter(log_formatter)
|
||||
LOG.addHandler(sh)
|
||||
|
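The `InterceptHandler` installed on `logging.root` above is the standard trick for funneling every stdlib `logging` record into another sink (loguru, in APRSD's case). A stdlib-only sketch of the interception idea, with the sink swapped for a plain list so it runs without loguru (the list sink is my assumption):

```python
import logging

captured = []  # stand-in sink for loguru's logger


class InterceptHandler(logging.Handler):
    """Forward every stdlib logging record to another sink."""

    def emit(self, record: logging.LogRecord) -> None:
        # APRSD re-emits the record via loguru here; we just collect it.
        captured.append((record.levelname, record.getMessage()))


# Intercept everything at the root logger, as setup_logging() does.
logging.root.handlers = [InterceptHandler()]
logging.root.setLevel(logging.DEBUG)

logging.getLogger("APRSD").warning("aprs-is connection lost")
```

Because child loggers propagate to root by default, a single handler on the root logger sees records from every library in the process.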
@@ -24,17 +24,18 @@ import datetime
import importlib.metadata as imp
from importlib.metadata import version as metadata_version
import logging
import os
import signal
import sys
import time

import click
import click_completion
from oslo_config import cfg, generator

# local imports here
import aprsd
from aprsd import cli_helper, packets, threads, utils
from aprsd.stats import collector
from aprsd import cli_helper, packets, stats, threads, utils


# setup the global logger
@@ -43,6 +44,19 @@ CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
flask_enabled = False
rpc_serv = None


def custom_startswith(string, incomplete):
    """A custom completion match that supports case insensitive matching."""
    if os.environ.get("_CLICK_COMPLETION_COMMAND_CASE_INSENSITIVE_COMPLETE"):
        string = string.lower()
        incomplete = incomplete.lower()
    return string.startswith(incomplete)


click_completion.core.startswith = custom_startswith
click_completion.init()


@click.group(cls=cli_helper.AliasedGroup, context_settings=CONTEXT_SETTINGS)
@@ -54,7 +68,7 @@ def cli(ctx):

def load_commands():
    from .cmds import (  # noqa
        admin, completion, dev, fetch_stats, healthcheck, list_plugins, listen,
        completion, dev, fetch_stats, healthcheck, list_plugins, listen,
        send_message, server, webchat,
    )

@@ -79,15 +93,10 @@ def signal_handler(sig, frame):
            ),
        )
        time.sleep(1.5)
        try:
            packets.PacketTrack().save()
            packets.WatchList().save()
            packets.SeenList().save()
            packets.PacketList().save()
            collector.Collector().collect()
        except Exception as e:
            LOG.error(f"Failed to save data: {e}")
        sys.exit(0)
        packets.PacketTrack().save()
        packets.WatchList().save()
        packets.SeenList().save()
        LOG.info(stats.APRSDStats())
        # signal.signal(signal.SIGTERM, sys.exit(0))
        # sys.exit(0)

@@ -113,25 +122,10 @@ def check_version(ctx):
def sample_config(ctx):
    """Generate a sample Config file from aprsd and all installed plugins."""

    def _get_selected_entry_points():
        import sys
        if sys.version_info < (3, 10):
            all = imp.entry_points()
            selected = []
            if "oslo.config.opts" in all:
                for x in all["oslo.config.opts"]:
                    if x.group == "oslo.config.opts":
                        selected.append(x)
        else:
            selected = imp.entry_points(group="oslo.config.opts")

        return selected

    def get_namespaces():
        args = []

        # selected = imp.entry_points(group="oslo.config.opts")
        selected = _get_selected_entry_points()
        selected = imp.entry_points(group="oslo.config.opts")
        for entry in selected:
            if "aprsd" in entry.name:
                args.append("--namespace")
@@ -151,6 +145,7 @@ def sample_config(ctx):
        if not sys.argv[1:]:
            raise SystemExit
        raise
    LOG.warning(conf.namespace)
    generator.generate(conf)
    return

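The `_get_selected_entry_points()` helper added above works around the `importlib.metadata` API change: before Python 3.10, `entry_points()` returned a dict-like mapping keyed by group name, while 3.10+ accepts a `group=` keyword. A sketch of the same version guard, generalized to any group (the `console_scripts` usage below is just an illustration, not from the diff):

```python
import sys
import importlib.metadata as imp


def get_entry_points(group: str):
    """Return entry points for a group across Python versions."""
    if sys.version_info < (3, 10):
        # Old API: a mapping of group name -> list of entry points.
        return list(imp.entry_points().get(group, []))
    # New API: select by group with a keyword argument.
    return list(imp.entry_points(group=group))


eps = get_entry_points("console_scripts")
names = [ep.name for ep in eps]
```

Calling `entry_points(group=...)` on 3.9 raises `TypeError`, which is why the explicit version branch exists rather than a `try`/`except`.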
4
aprsd/messaging.py
Normal file
@@ -0,0 +1,4 @@

# What to return from a plugin if we have processed the message
# and it's ok, but don't send a usage string back

# REMOVE THIS FILE
@@ -1,8 +1,6 @@
from aprsd.packets import collector
from aprsd.packets.core import (  # noqa: F401
    AckPacket, BeaconPacket, BulletinPacket, GPSPacket, MessagePacket,
    MicEPacket, ObjectPacket, Packet, RejectPacket, StatusPacket,
    ThirdPartyPacket, UnknownPacket, WeatherPacket, factory,
    AckPacket, BeaconPacket, GPSPacket, MessagePacket, MicEPacket, Packet,
    RejectPacket, StatusPacket, WeatherPacket,
)
from aprsd.packets.packet_list import PacketList  # noqa: F401
from aprsd.packets.seen_list import SeenList  # noqa: F401
@@ -10,11 +8,4 @@ from aprsd.packets.tracker import PacketTrack  # noqa: F401
from aprsd.packets.watch_list import WatchList  # noqa: F401


# Register all the packet tracking objects.
collector.PacketCollector().register(PacketList)
collector.PacketCollector().register(SeenList)
collector.PacketCollector().register(PacketTrack)
collector.PacketCollector().register(WatchList)


NULL_MESSAGE = -1

@@ -1,79 +0,0 @@
import logging
from typing import Callable, Protocol, runtime_checkable

from aprsd.packets import core
from aprsd.utils import singleton


LOG = logging.getLogger("APRSD")


@runtime_checkable
class PacketMonitor(Protocol):
    """Protocol for Monitoring packets in some way."""

    def rx(self, packet: type[core.Packet]) -> None:
        """When we get a packet from the network."""
        ...

    def tx(self, packet: type[core.Packet]) -> None:
        """When we send a packet out the network."""
        ...

    def flush(self) -> None:
        """Flush out any data."""
        ...

    def load(self) -> None:
        """Load any data."""
        ...


@singleton
class PacketCollector:
    def __init__(self):
        self.monitors: list[Callable] = []

    def register(self, monitor: Callable) -> None:
        if not isinstance(monitor, PacketMonitor):
            raise TypeError(f"Monitor {monitor} is not a PacketMonitor")
        self.monitors.append(monitor)

    def unregister(self, monitor: Callable) -> None:
        if not isinstance(monitor, PacketMonitor):
            raise TypeError(f"Monitor {monitor} is not a PacketMonitor")
        self.monitors.remove(monitor)

    def rx(self, packet: type[core.Packet]) -> None:
        for name in self.monitors:
            cls = name()
            try:
                cls.rx(packet)
            except Exception as e:
                LOG.error(f"Error in monitor {name} (rx): {e}")

    def tx(self, packet: type[core.Packet]) -> None:
        for name in self.monitors:
            cls = name()
            try:
                cls.tx(packet)
            except Exception as e:
                LOG.error(f"Error in monitor {name} (tx): {e}")

    def flush(self):
        """Call flush on the objects.  This is used to flush out any data."""
        for name in self.monitors:
            cls = name()
            try:
                cls.flush()
            except Exception as e:
                LOG.error(f"Error in monitor {name} (flush): {e}")

    def load(self):
        """Call load on the objects.  This is used to load any data."""
        for name in self.monitors:
            cls = name()
            try:
                cls.load()
            except Exception as e:
                LOG.error(f"Error in monitor {name} (load): {e}")
File diff suppressed because it is too large
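The deleted `collector.py` above relies on a `runtime_checkable` `Protocol` so `register()` can duck-type-check monitors without requiring inheritance. A minimal self-contained sketch of that design (the `CountingMonitor` is a hypothetical monitor, not from the codebase):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class PacketMonitor(Protocol):
    """Anything that wants to observe rx/tx packets."""

    def rx(self, packet) -> None: ...
    def tx(self, packet) -> None: ...


class PacketCollector:
    """Fan a packet out to every registered monitor."""

    def __init__(self):
        self.monitors = []

    def register(self, monitor) -> None:
        # runtime_checkable lets isinstance() verify the duck type.
        if not isinstance(monitor, PacketMonitor):
            raise TypeError(f"Monitor {monitor} is not a PacketMonitor")
        self.monitors.append(monitor)

    def rx(self, packet) -> None:
        for monitor in self.monitors:
            monitor.rx(packet)


class CountingMonitor:
    """Hypothetical monitor: counts received packets."""

    rx_count = 0

    def rx(self, packet) -> None:
        CountingMonitor.rx_count += 1

    def tx(self, packet) -> None:
        pass


collector = PacketCollector()
collector.register(CountingMonitor())
collector.rx({"from": "KFAKE-1"})
```

Note that `isinstance()` against a runtime-checkable protocol only verifies that the required attributes exist; it does not check call signatures.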
@@ -1,161 +0,0 @@
import logging
from typing import Optional

from geopy.distance import geodesic
from loguru import logger
from oslo_config import cfg

from aprsd import utils
from aprsd.packets.core import AckPacket, GPSPacket, RejectPacket


LOG = logging.getLogger()
LOGU = logger
CONF = cfg.CONF

FROM_COLOR = "fg #C70039"
TO_COLOR = "fg #D033FF"
TX_COLOR = "red"
RX_COLOR = "green"
PACKET_COLOR = "cyan"
DISTANCE_COLOR = "fg #FF5733"
DEGREES_COLOR = "fg #FFA900"


def log_multiline(packet, tx: Optional[bool] = False, header: Optional[bool] = True) -> None:
    """LOG a packet to the logfile."""
    if not CONF.enable_packet_logging:
        return
    if CONF.log_packet_format == "compact":
        return

    # asdict(packet)
    logit = ["\n"]
    name = packet.__class__.__name__

    if isinstance(packet, AckPacket):
        pkt_max_send_count = CONF.default_ack_send_count
    else:
        pkt_max_send_count = CONF.default_packet_send_count

    if header:
        if tx:
            header_str = f"<{TX_COLOR}>TX</{TX_COLOR}>"
            logit.append(
                f"{header_str}________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}> "
                f"TX:{packet.send_count + 1} of {pkt_max_send_count}",
            )
        else:
            header_str = f"<{RX_COLOR}>RX</{RX_COLOR}>"
            logit.append(
                f"{header_str}________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}>)",
            )

    else:
        header_str = ""
        logit.append(f"__________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}>)")
    # log_list.append(f" Packet : {packet.__class__.__name__}")
    if packet.msgNo:
        logit.append(f" Msg # : {packet.msgNo}")
    if packet.from_call:
        logit.append(f" From : <{FROM_COLOR}>{packet.from_call}</{FROM_COLOR}>")
    if packet.to_call:
        logit.append(f" To : <{TO_COLOR}>{packet.to_call}</{TO_COLOR}>")
    if hasattr(packet, "path") and packet.path:
        logit.append(f" Path : {'=>'.join(packet.path)}")
    if hasattr(packet, "via") and packet.via:
        logit.append(f" VIA : {packet.via}")

    if not isinstance(packet, AckPacket) and not isinstance(packet, RejectPacket):
        msg = packet.human_info

        if msg:
            msg = msg.replace("<", "\\<")
            logit.append(f" Info : <light-yellow><b>{msg}</b></light-yellow>")

    if hasattr(packet, "comment") and packet.comment:
        logit.append(f" Comment : {packet.comment}")

    raw = packet.raw.replace("<", "\\<")
    logit.append(f" Raw : <fg #828282>{raw}</fg #828282>")
    logit.append(f"{header_str}________(<{PACKET_COLOR}>{name}</{PACKET_COLOR}>)")

    LOGU.opt(colors=True).info("\n".join(logit))
    LOG.debug(repr(packet))


def log(packet, tx: Optional[bool] = False, header: Optional[bool] = True) -> None:
    if not CONF.enable_packet_logging:
        return
    if CONF.log_packet_format == "multiline":
        log_multiline(packet, tx, header)
        return

    logit = []
    name = packet.__class__.__name__
    if isinstance(packet, AckPacket):
        pkt_max_send_count = CONF.default_ack_send_count
    else:
        pkt_max_send_count = CONF.default_packet_send_count

    if header:
        if tx:
            via_color = "red"
            arrow = f"<{via_color}>\u2192</{via_color}>"
            logit.append(
                f"<red>TX\u2191</red> "
                f"<cyan>{name}</cyan>"
                f":{packet.msgNo}"
                f" ({packet.send_count + 1} of {pkt_max_send_count})",
            )
        else:
            via_color = "fg #1AA730"
            arrow = f"<{via_color}>\u2192</{via_color}>"
            f"<{via_color}><-</{via_color}>"
            logit.append(
                f"<fg #1AA730>RX\u2193</fg #1AA730> "
                f"<cyan>{name}</cyan>"
                f":{packet.msgNo}",
            )

    else:
        via_color = "green"
        arrow = f"<{via_color}>-></{via_color}>"
        logit.append(
            f"<cyan>{name}</cyan>"
            f":{packet.msgNo}",
        )

    tmp = None
    if packet.path:
        tmp = f"{arrow}".join(packet.path) + f"{arrow} "

    logit.append(
        f"<{FROM_COLOR}>{packet.from_call}</{FROM_COLOR}> {arrow}"
        f"{tmp if tmp else ' '}"
        f"<{TO_COLOR}>{packet.to_call}</{TO_COLOR}>",
    )

    if not isinstance(packet, AckPacket) and not isinstance(packet, RejectPacket):
        logit.append(":")
        msg = packet.human_info

        if msg:
            msg = msg.replace("<", "\\<")
            logit.append(f"<light-yellow><b>{msg}</b></light-yellow>")

    # is there distance information?
    if isinstance(packet, GPSPacket) and CONF.latitude and CONF.longitude:
        my_coords = (CONF.latitude, CONF.longitude)
        packet_coords = (packet.latitude, packet.longitude)
        try:
            bearing = utils.calculate_initial_compass_bearing(my_coords, packet_coords)
        except Exception as e:
            LOG.error(f"Failed to calculate bearing: {e}")
            bearing = 0
        logit.append(
            f" : <{DEGREES_COLOR}>{utils.degrees_to_cardinal(bearing, full_string=True)}</{DEGREES_COLOR}>"
            f"<{DISTANCE_COLOR}>@{geodesic(my_coords, packet_coords).miles:.2f}miles</{DISTANCE_COLOR}>",
        )

    LOGU.opt(colors=True).info(" ".join(logit))
    log_multiline(packet, tx, header)
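The deleted `log()` above calls `utils.calculate_initial_compass_bearing()` to describe where a GPS packet came from. The standard initial great-circle bearing formula, as a stdlib-only sketch (this is not the project's exact implementation, just the textbook formula it appears to use):

```python
import math


def initial_compass_bearing(start, end):
    """Initial great-circle bearing from start to end, in degrees [0, 360).

    Points are (latitude, longitude) tuples in decimal degrees.
    """
    lat1, lat2 = math.radians(start[0]), math.radians(end[0])
    dlon = math.radians(end[1] - start[1])
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    # atan2 yields (-180, 180]; normalize to a compass heading.
    return (math.degrees(math.atan2(x, y)) + 360) % 360


bearing = initial_compass_bearing((0.0, 0.0), (0.0, 10.0))  # a point due east
```

The result can then be mapped to a cardinal name (N, NNE, ...) the way `degrees_to_cardinal()` does.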
@@ -1,100 +1,99 @@
from collections import OrderedDict
from collections.abc import MutableMapping
import logging
import threading

from oslo_config import cfg
import wrapt

from aprsd.packets import core
from aprsd.utils import objectstore
from aprsd import stats
from aprsd.packets import seen_list


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


class PacketList(objectstore.ObjectStoreMixin):
    """Class to keep track of the packets we tx/rx."""
class PacketList(MutableMapping):
    _instance = None
    lock = threading.Lock()
    _total_rx: int = 0
    _total_tx: int = 0
    maxlen: int = 100
    types = {}

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.maxlen = CONF.packet_list_maxlen
            cls._instance._init_data()
            cls._maxlen = 100
            cls.d = OrderedDict()
        return cls._instance

    def _init_data(self):
        self.data = {
            "types": {},
            "packets": OrderedDict(),
        }

    def rx(self, packet: type[core.Packet]):
    @wrapt.synchronized(lock)
    def rx(self, packet):
        """Add a packet that was received."""
        with self.lock:
            self._total_rx += 1
            self._add(packet)
            ptype = packet.__class__.__name__
            type_stats = self.data["types"].setdefault(
                ptype, {"tx": 0, "rx": 0},
            )
            type_stats["rx"] += 1
        self._total_rx += 1
        self._add(packet)
        ptype = packet.__class__.__name__
        if not ptype in self.types:
            self.types[ptype] = {"tx": 0, "rx": 0}
        self.types[ptype]["rx"] += 1
        seen_list.SeenList().update_seen(packet)
        stats.APRSDStats().rx(packet)

    def tx(self, packet: type[core.Packet]):
    @wrapt.synchronized(lock)
    def tx(self, packet):
        """Add a packet that was received."""
        with self.lock:
            self._total_tx += 1
            self._add(packet)
            ptype = packet.__class__.__name__
            type_stats = self.data["types"].setdefault(
                ptype, {"tx": 0, "rx": 0},
            )
            type_stats["tx"] += 1
        self._total_tx += 1
        self._add(packet)
        ptype = packet.__class__.__name__
        if not ptype in self.types:
            self.types[ptype] = {"tx": 0, "rx": 0}
        self.types[ptype]["tx"] += 1
        seen_list.SeenList().update_seen(packet)
        stats.APRSDStats().tx(packet)

    @wrapt.synchronized(lock)
    def add(self, packet):
        with self.lock:
            self._add(packet)
        self._add(packet)

    def _add(self, packet):
        if not self.data.get("packets"):
            self._init_data()
        if packet.key in self.data["packets"]:
            self.data["packets"].move_to_end(packet.key)
        elif len(self.data["packets"]) == self.maxlen:
            self.data["packets"].popitem(last=False)
        self.data["packets"][packet.key] = packet
        self[packet.key] = packet

    def copy(self):
        return self.d.copy()

    @property
    def maxlen(self):
        return self._maxlen

    @wrapt.synchronized(lock)
    def find(self, packet):
        with self.lock:
            return self.data["packets"][packet.key]
        return self.get(packet.key)

    def __getitem__(self, key):
        # self.d.move_to_end(key)
        return self.d[key]

    def __setitem__(self, key, value):
        if key in self.d:
            self.d.move_to_end(key)
        elif len(self.d) == self.maxlen:
            self.d.popitem(last=False)
        self.d[key] = value

    def __delitem__(self, key):
        del self.d[key]

    def __iter__(self):
        return self.d.__iter__()

    def __len__(self):
        with self.lock:
            return len(self.data["packets"])
        return len(self.d)

    @wrapt.synchronized(lock)
    def total_rx(self):
        with self.lock:
            return self._total_rx
        return self._total_rx

    @wrapt.synchronized(lock)
    def total_tx(self):
        with self.lock:
            return self._total_tx

    def stats(self, serializable=False) -> dict:
        with self.lock:
            # Get last N packets directly using list slicing
            packets_list = list(self.data.get("packets", {}).values())
            pkts = packets_list[-CONF.packet_list_stats_maxlen:][::-1]

            stats = {
                "total_tracked": self._total_rx + self._total_tx,  # Fixed typo: was rx + rx
                "rx": self._total_rx,
                "tx": self._total_tx,
                "types": self.data.get("types", {}),  # Changed default from [] to {}
                "packet_count": len(self.data.get("packets", [])),
                "maxlen": self.maxlen,
                "packets": pkts,
            }
            return stats
        return self._total_tx

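Both versions of `PacketList` above bound their history with an `OrderedDict`: re-adding an existing key refreshes its position instead of growing the dict, and the oldest entry is evicted once `maxlen` is reached. The eviction logic in isolation (class and data are illustrative, not from the diff):

```python
from collections import OrderedDict


class BoundedPacketStore:
    """Insertion-ordered store capped at maxlen entries."""

    def __init__(self, maxlen: int):
        self.maxlen = maxlen
        self.packets = OrderedDict()

    def add(self, key, packet) -> None:
        if key in self.packets:
            # Seen before: refresh its position instead of growing.
            self.packets.move_to_end(key)
        elif len(self.packets) == self.maxlen:
            # Full: evict the oldest entry (front of the order).
            self.packets.popitem(last=False)
        self.packets[key] = packet


store = BoundedPacketStore(maxlen=2)
store.add("a", 1)
store.add("b", 2)
store.add("a", 10)   # refreshes "a", so "b" is now oldest
store.add("c", 3)    # evicts "b"
keys = list(store.packets)
```

This keeps memory flat no matter how long the daemon runs, at the cost of forgetting the oldest packets.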
@@ -1,9 +1,10 @@
import datetime
import logging
import threading

from oslo_config import cfg
import wrapt

from aprsd.packets import core
from aprsd.utils import objectstore


@@ -15,35 +16,28 @@ class SeenList(objectstore.ObjectStoreMixin):
    """Global callsign seen list."""

    _instance = None
    lock = threading.Lock()
    data: dict = {}

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._init_store()
            cls._instance.data = {}
        return cls._instance

    def stats(self, serializable=False):
        """Return the stats for the PacketTrack class."""
        with self.lock:
            return self.data

    def rx(self, packet: type[core.Packet]):
        """When we get a packet from the network, update the seen list."""
        with self.lock:
            callsign = None
            if packet.from_call:
                callsign = packet.from_call
            else:
                LOG.warning(f"Can't find FROM in packet {packet}")
                return
            if callsign not in self.data:
                self.data[callsign] = {
                    "last": None,
                    "count": 0,
                }
            self.data[callsign]["last"] = datetime.datetime.now()
            self.data[callsign]["count"] += 1

    def tx(self, packet: type[core.Packet]):
        """We don't care about TX packets."""
    @wrapt.synchronized(lock)
    def update_seen(self, packet):
        callsign = None
        if packet.from_call:
            callsign = packet.from_call
        else:
            LOG.warning(f"Can't find FROM in packet {packet}")
            return
        if callsign not in self.data:
            self.data[callsign] = {
                "last": None,
                "count": 0,
            }
        self.data[callsign]["last"] = str(datetime.datetime.now())
        self.data[callsign]["count"] += 1

@@ -1,14 +1,14 @@
import datetime
import logging
import threading

from oslo_config import cfg
import wrapt

from aprsd.packets import core
from aprsd.threads import tx
from aprsd.utils import objectstore


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


class PacketTrack(objectstore.ObjectStoreMixin):
@@ -26,6 +26,7 @@ class PacketTrack(objectstore.ObjectStoreMixin):

    _instance = None
    _start_time = None
    lock = threading.Lock()

    data: dict = {}
    total_tracked: int = 0
@@ -37,67 +38,74 @@ class PacketTrack(objectstore.ObjectStoreMixin):
            cls._instance._init_store()
        return cls._instance

    @wrapt.synchronized(lock)
    def __getitem__(self, name):
        with self.lock:
            return self.data[name]
        return self.data[name]

    @wrapt.synchronized(lock)
    def __iter__(self):
        with self.lock:
            return iter(self.data)
        return iter(self.data)

    @wrapt.synchronized(lock)
    def keys(self):
        with self.lock:
            return self.data.keys()
        return self.data.keys()

    @wrapt.synchronized(lock)
    def items(self):
        with self.lock:
            return self.data.items()
        return self.data.items()

    @wrapt.synchronized(lock)
    def values(self):
        with self.lock:
            return self.data.values()
        return self.data.values()

    def stats(self, serializable=False):
        with self.lock:
            stats = {
                "total_tracked": self.total_tracked,
            }
            pkts = {}
            for key in self.data:
                last_send_time = self.data[key].last_send_time
                pkts[key] = {
                    "last_send_time": last_send_time,
                    "send_count": self.data[key].send_count,
                    "retry_count": self.data[key].retry_count,
                    "message": self.data[key].raw,
                }
            stats["packets"] = pkts
        return stats
    @wrapt.synchronized(lock)
    def __len__(self):
        return len(self.data)

    def rx(self, packet: type[core.Packet]) -> None:
        """When we get a packet from the network, check if we should remove it."""
        if isinstance(packet, core.AckPacket):
            self._remove(packet.msgNo)
        elif isinstance(packet, core.RejectPacket):
            self._remove(packet.msgNo)
        elif hasattr(packet, "ackMsgNo"):
            # Got a piggyback ack, so remove the original message
            self._remove(packet.ackMsgNo)
    @wrapt.synchronized(lock)
    def add(self, packet):
        key = packet.msgNo
        packet._last_send_attempt = 0
        self.data[key] = packet
        self.total_tracked += 1

    def tx(self, packet: type[core.Packet]) -> None:
        """Add a packet that was sent."""
        with self.lock:
            key = packet.msgNo
            packet.send_count = 0
            self.data[key] = packet
            self.total_tracked += 1
    @wrapt.synchronized(lock)
    def get(self, key):
        return self.data.get(key, None)

    @wrapt.synchronized(lock)
    def remove(self, key):
        self._remove(key)
        try:
            del self.data[key]
        except KeyError:
            pass

    def _remove(self, key):
        with self.lock:
            try:
                del self.data[key]
            except KeyError:
                pass
    def restart(self):
        """Walk the list of messages and restart them if any."""
        for key in self.data.keys():
            pkt = self.data[key]
            if pkt._last_send_attempt < pkt.retry_count:
                tx.send(pkt)

    def _resend(self, packet):
        packet._last_send_attempt = 0
        tx.send(packet)

    def restart_delayed(self, count=None, most_recent=True):
        """Walk the list of delayed messages and restart them if any."""
        if not count:
            # Send all the delayed messages
            for key in self.data.keys():
                pkt = self.data[key]
                if pkt._last_send_attempt == pkt._retry_count:
                    self._resend(pkt)
        else:
            # They want to resend <count> delayed messages
            tmp = sorted(
                self.data.items(),
                reverse=most_recent,
                key=lambda x: x[1].last_send_time,
            )
            pkt_list = tmp[:count]
            for (_key, pkt) in pkt_list:
                self._resend(pkt)

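`restart_delayed()` above selects the `count` most recently sent packets by sorting tracked entries on `last_send_time` and slicing. The selection logic in isolation, with a hypothetical stand-in for the tracked packet objects:

```python
from dataclasses import dataclass


@dataclass
class TrackedPacket:
    """Stand-in for a tracked APRS packet (not the real class)."""
    msgNo: str
    last_send_time: float


data = {
    "1": TrackedPacket("1", last_send_time=100.0),
    "2": TrackedPacket("2", last_send_time=300.0),
    "3": TrackedPacket("3", last_send_time=200.0),
}


def pick_delayed(data, count, most_recent=True):
    """Return count packets ordered by last send time."""
    tmp = sorted(
        data.items(),
        reverse=most_recent,
        key=lambda item: item[1].last_send_time,
    )
    return [pkt for _key, pkt in tmp[:count]]


picked = pick_delayed(data, count=2)
```

With `most_recent=False` the same call would instead resend the longest-waiting packets first.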
@@ -1,10 +1,11 @@
import datetime
import logging
import threading

from oslo_config import cfg
import wrapt

from aprsd import utils
from aprsd.packets import core
from aprsd.utils import objectstore


@@ -16,75 +17,56 @@ class WatchList(objectstore.ObjectStoreMixin):
    """Global watch list and info for callsigns."""

    _instance = None
    lock = threading.Lock()
    data = {}

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._init_store()
            cls._instance.data = {}
        return cls._instance

    def __init__(self):
        super().__init__()
        self._update_from_conf()
    def __init__(self, config=None):
        ring_size = CONF.watch_list.packet_keep_count

    def _update_from_conf(self, config=None):
        with self.lock:
            if CONF.watch_list.enabled and CONF.watch_list.callsigns:
                for callsign in CONF.watch_list.callsigns:
                    call = callsign.replace("*", "")
                    # FIXME(waboring) - we should fetch the last time we saw
                    # a beacon from a callsign or some other mechanism to find
                    # last time a message was seen by aprs-is. For now this
                    # is all we can do.
                    if call not in self.data:
                        self.data[call] = {
                            "last": None,
                            "packet": None,
                        }

    def stats(self, serializable=False) -> dict:
        stats = {}
        with self.lock:
            for callsign in self.data:
                stats[callsign] = {
                    "last": self.data[callsign]["last"],
                    "packet": self.data[callsign]["packet"],
                    "age": self.age(callsign),
                    "old": self.is_old(callsign),
        if CONF.watch_list.callsigns:
            for callsign in CONF.watch_list.callsigns:
                call = callsign.replace("*", "")
                # FIXME(waboring) - we should fetch the last time we saw
                # a beacon from a callsign or some other mechanism to find
                # last time a message was seen by aprs-is. For now this
                # is all we can do.
                self.data[call] = {
                    "last": datetime.datetime.now(),
                    "packets": utils.RingBuffer(
                        ring_size,
                    ),
                }
        return stats

    def is_enabled(self):
        return CONF.watch_list.enabled

    def callsign_in_watchlist(self, callsign):
        with self.lock:
            return callsign in self.data

    def rx(self, packet: type[core.Packet]) -> None:
        """Track when we got a packet from the network."""
        callsign = packet.from_call
        return callsign in self.data

    @wrapt.synchronized(lock)
    def update_seen(self, packet):
        if packet.addresse:
            callsign = packet.addresse
        else:
            callsign = packet.from_call
        if self.callsign_in_watchlist(callsign):
            with self.lock:
                self.data[callsign]["last"] = datetime.datetime.now()
                self.data[callsign]["packet"] = packet

    def tx(self, packet: type[core.Packet]) -> None:
        """We don't care about TX packets."""
            self.data[callsign]["last"] = datetime.datetime.now()
            self.data[callsign]["packets"].append(packet)

    def last_seen(self, callsign):
        with self.lock:
            if self.callsign_in_watchlist(callsign):
                return self.data[callsign]["last"]
        if self.callsign_in_watchlist(callsign):
            return self.data[callsign]["last"]

    def age(self, callsign):
        now = datetime.datetime.now()
        last_seen_time = self.last_seen(callsign)
        if last_seen_time:
            return str(now - last_seen_time)
        else:
            return None
        return str(now - self.last_seen(callsign))

    def max_delta(self, seconds=None):
        if not seconds:
@@ -101,19 +83,14 @@ class WatchList(objectstore.ObjectStoreMixin):
        We put this here so any notification plugin can use this
        same test.
        """
        if not self.callsign_in_watchlist(callsign):
            return False

        age = self.age(callsign)
        if age:
            delta = utils.parse_delta_str(age)
            d = datetime.timedelta(**delta)

            max_delta = self.max_delta(seconds=seconds)
        delta = utils.parse_delta_str(age)
        d = datetime.timedelta(**delta)

        if d > max_delta:
            return True
        else:
            return False
        max_delta = self.max_delta(seconds=seconds)

        if d > max_delta:
            return True
        else:
            return False

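The `is_old()` logic above boils down to comparing the age of the last packet from a callsign against a maximum `timedelta`. The core comparison, sketched standalone (the injectable `now` parameter is my addition so the check is deterministic):

```python
import datetime
from typing import Optional


def is_old(last_seen: datetime.datetime, max_seconds: int,
           now: Optional[datetime.datetime] = None) -> bool:
    """True when the last packet from a callsign is older than max_seconds."""
    now = now or datetime.datetime.now()
    max_delta = datetime.timedelta(seconds=max_seconds)
    return (now - last_seen) > max_delta


now = datetime.datetime(2024, 1, 1, 12, 0, 0)
fresh = now - datetime.timedelta(seconds=30)
stale = now - datetime.timedelta(hours=2)
```

Keeping the comparison in one place is what lets every notification plugin share the same staleness test, as the docstring in the diff notes.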
@ -1,5 +1,4 @@
|
||||
from __future__ import annotations
|
||||
|
||||
# The base plugin class
|
||||
import abc
|
||||
import importlib
|
||||
import inspect
|
||||
@ -25,6 +24,7 @@ CORE_MESSAGE_PLUGINS = [
|
||||
"aprsd.plugins.fortune.FortunePlugin",
|
||||
"aprsd.plugins.location.LocationPlugin",
|
||||
"aprsd.plugins.ping.PingPlugin",
|
||||
"aprsd.plugins.query.QueryPlugin",
|
||||
"aprsd.plugins.time.TimePlugin",
|
||||
"aprsd.plugins.weather.USWeatherPlugin",
|
||||
"aprsd.plugins.version.VersionPlugin",
|
||||
@@ -42,7 +42,7 @@ class APRSDPluginSpec:
     """A hook specification namespace."""
 
     @hookspec
-    def filter(self, packet: type[packets.Packet]):
+    def filter(self, packet: packets.core.Packet):
         """My special little hook that you can customize."""
 
 
@@ -65,7 +65,7 @@ class APRSDPluginBase(metaclass=abc.ABCMeta):
         self.threads = self.create_threads() or []
         self.start_threads()
 
-    def start_threads(self) -> None:
+    def start_threads(self):
         if self.enabled and self.threads:
             if not isinstance(self.threads, list):
                 self.threads = [self.threads]
@@ -90,10 +90,10 @@ class APRSDPluginBase(metaclass=abc.ABCMeta):
         )
 
     @property
-    def message_count(self) -> int:
+    def message_count(self):
         return self.message_counter
 
-    def help(self) -> str:
+    def help(self):
         return "Help!"
 
     @abc.abstractmethod
@@ -118,11 +118,11 @@ class APRSDPluginBase(metaclass=abc.ABCMeta):
             thread.stop()
 
     @abc.abstractmethod
-    def filter(self, packet: type[packets.Packet]) -> str | packets.MessagePacket:
+    def filter(self, packet: packets.core.Packet):
         pass
 
     @abc.abstractmethod
-    def process(self, packet: type[packets.Packet]):
+    def process(self, packet: packets.core.Packet):
         """This is called when the filter passes."""
 
 
@@ -147,14 +147,14 @@ class APRSDWatchListPluginBase(APRSDPluginBase, metaclass=abc.ABCMeta):
             watch_list = CONF.watch_list.callsigns
             # make sure the timeout is set or this doesn't work
             if watch_list:
-                aprs_client = client.client_factory.create().client
+                aprs_client = client.factory.create().client
                 filter_str = "b/{}".format("/".join(watch_list))
                 aprs_client.set_filter(filter_str)
             else:
                 LOG.warning("Watch list enabled, but no callsigns set.")
 
     @hookimpl
-    def filter(self, packet: type[packets.Packet]) -> str | packets.MessagePacket:
+    def filter(self, packet: packets.core.Packet):
         result = packets.NULL_MESSAGE
         if self.enabled:
             wl = watch_list.WatchList()
@@ -206,14 +206,14 @@ class APRSDRegexCommandPluginBase(APRSDPluginBase, metaclass=abc.ABCMeta):
             self.enabled = True
 
     @hookimpl
-    def filter(self, packet: packets.MessagePacket) -> str | packets.MessagePacket:
-        LOG.debug(f"{self.__class__.__name__} called")
+    def filter(self, packet: packets.core.MessagePacket):
+        LOG.info(f"{self.__class__.__name__} called")
         if not self.enabled:
             result = f"{self.__class__.__name__} isn't enabled"
             LOG.warning(result)
             return result
 
-        if not isinstance(packet, packets.MessagePacket):
+        if not isinstance(packet, packets.core.MessagePacket):
             LOG.warning(f"{self.__class__.__name__} Got a {packet.__class__.__name__} ignoring")
             return packets.NULL_MESSAGE
 
@@ -226,7 +226,7 @@ class APRSDRegexCommandPluginBase(APRSDPluginBase, metaclass=abc.ABCMeta):
         # and is an APRS message format and has a message.
         if (
             tocall == CONF.callsign
-            and isinstance(packet, packets.MessagePacket)
+            and isinstance(packet, packets.core.MessagePacket)
             and message
         ):
             if re.search(self.command_regex, message, re.IGNORECASE):
@@ -269,7 +269,7 @@ class HelpPlugin(APRSDRegexCommandPluginBase):
     def help(self):
         return "Help: send APRS help or help <plugin>"
 
-    def process(self, packet: packets.MessagePacket):
+    def process(self, packet: packets.core.MessagePacket):
         LOG.info("HelpPlugin")
         # fromcall = packet.get("from")
         message = packet.message_text
@@ -343,28 +343,6 @@ class PluginManager:
         self._watchlist_pm = pluggy.PluginManager("aprsd")
         self._watchlist_pm.add_hookspecs(APRSDPluginSpec)
 
-    def stats(self, serializable=False) -> dict:
-        """Collect and return stats for all plugins."""
-        def full_name_with_qualname(obj):
-            return "{}.{}".format(
-                obj.__class__.__module__,
-                obj.__class__.__qualname__,
-            )
-
-        plugin_stats = {}
-        plugins = self.get_plugins()
-        if plugins:
-
-            for p in plugins:
-                plugin_stats[full_name_with_qualname(p)] = {
-                    "enabled": p.enabled,
-                    "rx": p.rx_count,
-                    "tx": p.tx_count,
-                    "version": p.version,
-                }
-
-        return plugin_stats
-
     def is_plugin(self, obj):
         for c in inspect.getmro(obj):
             if issubclass(c, APRSDPluginBase):
@@ -390,9 +368,7 @@ class PluginManager:
        try:
            module_name, class_name = module_class_string.rsplit(".", 1)
            module = importlib.import_module(module_name)
-            # Commented out because the email thread starts in a different context
-            # and hence gives a different singleton for the EmailStats
-            # module = importlib.reload(module)
+            module = importlib.reload(module)
        except Exception as ex:
            if not module_name:
                LOG.error(f"Failed to load Plugin {module_class_string}")
@@ -472,10 +448,7 @@ class PluginManager:
         del self._pluggy_pm
         self.setup_plugins()
 
-    def setup_plugins(
-        self, load_help_plugin=True,
-        plugin_list=[],
-    ):
+    def setup_plugins(self, load_help_plugin=True):
         """Create the plugin manager and register plugins."""
 
         LOG.info("Loading APRSD Plugins")
@@ -484,13 +457,9 @@ class PluginManager:
             _help = HelpPlugin()
             self._pluggy_pm.register(_help)
 
-        # if plugins_list is passed in, only load
-        # those plugins.
-        if plugin_list:
-            for plugin_name in plugin_list:
-                self._load_plugin(plugin_name)
-        elif CONF.enabled_plugins:
-            for p_name in CONF.enabled_plugins:
+        enabled_plugins = CONF.enabled_plugins
+        if enabled_plugins:
+            for p_name in enabled_plugins:
                 self._load_plugin(p_name)
         else:
             # Enabled plugins isn't set, so we default to loading all of
@@ -500,12 +469,12 @@ class PluginManager:
 
         LOG.info("Completed Plugin Loading.")
 
-    def run(self, packet: packets.MessagePacket):
+    def run(self, packet: packets.core.MessagePacket):
         """Execute all the plugins run method."""
         with self.lock:
             return self._pluggy_pm.hook.filter(packet=packet)
 
-    def run_watchlist(self, packet: packets.Packet):
+    def run_watchlist(self, packet: packets.core.Packet):
         with self.lock:
             return self._watchlist_pm.hook.filter(packet=packet)
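The `_load_plugin` path in the hunk above resolves a dotted string with `rsplit(".", 1)` plus `importlib.import_module`. A minimal standalone sketch of that lookup, using only the stdlib (the `load_class` helper name is ours, not aprsd's):

```python
import importlib


def load_class(module_class_string: str):
    """Resolve a dotted "pkg.module.ClassName" string to the class object,
    the same split-then-import approach PluginManager uses."""
    module_name, class_name = module_class_string.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Example: resolve a stdlib class the same way a plugin path would be.
cls = load_class("collections.OrderedDict")
print(cls.__name__)  # OrderedDict
```

This is why a misspelled plugin path in `enabled_plugins` fails at startup: `import_module` raises before the class lookup ever happens, which is the error branch the hunk logs.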
@@ -11,8 +11,7 @@ import time
 import imapclient
 from oslo_config import cfg
 
-from aprsd import packets, plugin, threads, utils
-from aprsd.stats import collector
+from aprsd import packets, plugin, stats, threads
 from aprsd.threads import tx
 from aprsd.utils import trace
 
@@ -61,38 +60,6 @@ class EmailInfo:
             self._delay = val
 
 
-@utils.singleton
-class EmailStats:
-    """Singleton object to store stats related to email."""
-    _instance = None
-    tx = 0
-    rx = 0
-    email_thread_last_time = None
-
-    def stats(self, serializable=False):
-        if CONF.email_plugin.enabled:
-            last_check_time = self.email_thread_last_time
-            if serializable and last_check_time:
-                last_check_time = last_check_time.isoformat()
-            stats = {
-                "tx": self.tx,
-                "rx": self.rx,
-                "last_check_time": last_check_time,
-            }
-        else:
-            stats = {}
-        return stats
-
-    def tx_inc(self):
-        self.tx += 1
-
-    def rx_inc(self):
-        self.rx += 1
-
-    def email_thread_update(self):
-        self.email_thread_last_time = datetime.datetime.now()
-
-
 class EmailPlugin(plugin.APRSDRegexCommandPluginBase):
     """Email Plugin."""
 
@@ -127,11 +94,6 @@ class EmailPlugin(plugin.APRSDRegexCommandPluginBase):
 
             shortcuts = _build_shortcuts_dict()
             LOG.info(f"Email shortcuts {shortcuts}")
-
-            # Register the EmailStats producer with the stats collector
-            # We do this here to prevent EmailStats from being registered
-            # when email is not enabled in the config file.
-            collector.Collector().register_producer(EmailStats)
         else:
             LOG.info("Email services not enabled.")
             self.enabled = False
@@ -228,6 +190,10 @@ class EmailPlugin(plugin.APRSDRegexCommandPluginBase):
 def _imap_connect():
     imap_port = CONF.email_plugin.imap_port
     use_ssl = CONF.email_plugin.imap_use_ssl
+    # host = CONFIG["aprsd"]["email"]["imap"]["host"]
+    # msg = "{}{}:{}".format("TLS " if use_ssl else "", host, imap_port)
+    # LOG.debug("Connect to IMAP host {} with user '{}'".
+    #           format(msg, CONFIG['imap']['login']))
 
     try:
         server = imapclient.IMAPClient(
@@ -474,7 +440,7 @@ def send_email(to_addr, content):
             [to_addr],
             msg.as_string(),
         )
-        EmailStats().tx_inc()
+        stats.APRSDStats().email_tx_inc()
     except Exception:
         LOG.exception("Sendmail Error!!!!")
     server.quit()
@@ -579,7 +545,7 @@ class APRSDEmailThread(threads.APRSDThread):
 
     def loop(self):
         time.sleep(5)
-        EmailStats().email_thread_update()
+        stats.APRSDStats().email_thread_update()
         # always sleep for 5 seconds and see if we need to check email
         # This allows CTRL-C to stop the execution of this loop sooner
         # than check_email_delay time
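The removed `EmailStats` class above depends on a `@utils.singleton` class decorator. A minimal sketch of how such a decorator can work (our own illustration, not aprsd's actual `utils.singleton` implementation):

```python
def singleton(cls):
    """Class decorator: every call to the decorated name returns
    the same cached instance."""
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance


@singleton
class Stats:
    def __init__(self):
        self.tx = 0

    def tx_inc(self):
        self.tx += 1


Stats().tx_inc()
Stats().tx_inc()
print(Stats().tx)  # 2 -- both calls mutated the same instance
```

The hunk's removed comment ("the email thread starts in a different context and hence gives a different singleton") hints at the weakness of this approach: reloading the module recreates the decorator's `instances` dict, silently forking the "singleton".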
@@ -8,7 +8,7 @@ from aprsd.utils import trace
 
 LOG = logging.getLogger("APRSD")
 
-DEFAULT_FORTUNE_PATH = "/usr/games/fortune"
+DEFAULT_FORTUNE_PATH = '/usr/games/fortune'
 
 
 class FortunePlugin(plugin.APRSDRegexCommandPluginBase):
@@ -45,7 +45,7 @@ class FortunePlugin(plugin.APRSDRegexCommandPluginBase):
                 command,
                 shell=True,
                 timeout=3,
-                text=True,
+                universal_newlines=True,
             )
             output = (
                 output.replace("\r", "")
@@ -2,10 +2,8 @@ import logging
 import re
 import time
 
-from geopy.geocoders import (
-    ArcGIS, AzureMaps, Baidu, Bing, GoogleV3, HereV7, Nominatim, OpenCage,
-    TomTom, What3WordsV3, Woosmap,
-)
+from geopy.geocoders import ArcGIS, AzureMaps, Baidu, Bing, GoogleV3
+from geopy.geocoders import HereV7, Nominatim, OpenCage, TomTom, What3WordsV3, Woosmap
 from oslo_config import cfg
 
 from aprsd import packets, plugin, plugin_utils
@@ -41,8 +39,8 @@ class USGov:
         result = plugin_utils.get_weather_gov_for_gps(lat, lon)
         # LOG.info(f"WEATHER: {result}")
         # LOG.info(f"area description {result['location']['areaDescription']}")
-        if "location" in result:
-            loc = UsLocation(result["location"]["areaDescription"])
+        if 'location' in result:
+            loc = UsLocation(result['location']['areaDescription'])
         else:
             loc = UsLocation("Unknown Location")
 
 81	aprsd/plugins/query.py	Normal file
@@ -0,0 +1,81 @@
+import datetime
+import logging
+import re
+
+from oslo_config import cfg
+
+from aprsd import packets, plugin
+from aprsd.packets import tracker
+from aprsd.utils import trace
+
+
+CONF = cfg.CONF
+LOG = logging.getLogger("APRSD")
+
+
+class QueryPlugin(plugin.APRSDRegexCommandPluginBase):
+    """Query command."""
+
+    command_regex = r"^\!.*"
+    command_name = "query"
+    short_description = "APRSD Owner command to query messages in the MsgTrack"
+
+    def setup(self):
+        """Do any plugin setup here."""
+        if not CONF.query_plugin.callsign:
+            LOG.error("Config query_plugin.callsign not set. Disabling plugin")
+            self.enabled = False
+        self.enabled = True
+
+    @trace.trace
+    def process(self, packet: packets.MessagePacket):
+        LOG.info("Query COMMAND")
+
+        fromcall = packet.from_call
+        message = packet.get("message_text", None)
+
+        pkt_tracker = tracker.PacketTrack()
+        now = datetime.datetime.now()
+        reply = "Pending messages ({}) {}".format(
+            len(pkt_tracker),
+            now.strftime("%H:%M:%S"),
+        )
+
+        searchstring = "^" + CONF.query_plugin.callsign + ".*"
+        # only I can do admin commands
+        if re.search(searchstring, fromcall):
+
+            # resend last N most recent: "!3"
+            r = re.search(r"^\!([0-9]).*", message)
+            if r is not None:
+                if len(pkt_tracker) > 0:
+                    last_n = r.group(1)
+                    reply = packets.NULL_MESSAGE
+                    LOG.debug(reply)
+                    pkt_tracker.restart_delayed(count=int(last_n))
+                else:
+                    reply = "No pending msgs to resend"
+                    LOG.debug(reply)
+                return reply
+
+            # resend all: "!a"
+            r = re.search(r"^\![aA].*", message)
+            if r is not None:
+                if len(pkt_tracker) > 0:
+                    reply = packets.NULL_MESSAGE
+                    LOG.debug(reply)
+                    pkt_tracker.restart_delayed()
+                else:
+                    reply = "No pending msgs"
+                    LOG.debug(reply)
+                return reply
+
+            # delete all: "!d"
+            r = re.search(r"^\![dD].*", message)
+            if r is not None:
+                reply = "Deleted ALL pending msgs."
+                LOG.debug(reply)
+                pkt_tracker.flush()
+                return reply
+
+        return reply
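`QueryPlugin.process()` above is essentially a small regex dispatcher over owner commands. The same routing logic in isolation (the `dispatch` function name and reply strings are ours):

```python
import re


def dispatch(message: str) -> str:
    """Route "!N", "!a", "!d" style admin commands, mirroring the
    resend / resend-all / delete-all branches of QueryPlugin.process()."""
    r = re.search(r"^\!([0-9]).*", message)
    if r is not None:
        return f"resend last {r.group(1)}"
    if re.search(r"^\![aA].*", message):
        return "resend all"
    if re.search(r"^\![dD].*", message):
        return "delete all"
    return "unknown command"


print(dispatch("!3"))  # resend last 3
print(dispatch("!D"))  # delete all
```

Note the ordering matters: `^\!([0-9])` must be tried before the catch-all `command_regex` (`^\!.*`) would otherwise swallow every `!`-prefixed message.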
@@ -1,9 +1,9 @@
 import logging
 import re
+import time
 
 from oslo_config import cfg
 import pytz
-from tzlocal import get_localzone
 
 from aprsd import packets, plugin, plugin_utils
 from aprsd.utils import fuzzy, trace
@@ -22,8 +22,7 @@ class TimePlugin(plugin.APRSDRegexCommandPluginBase):
     short_description = "What is the current local time."
 
     def _get_local_tz(self):
-        lz = get_localzone()
-        return pytz.timezone(str(lz))
+        return pytz.timezone(time.strftime("%Z"))
 
     def _get_utcnow(self):
         return pytz.datetime.datetime.utcnow()
@@ -1,8 +1,7 @@
 import logging
 
 import aprsd
-from aprsd import plugin
-from aprsd.stats import collector
+from aprsd import plugin, stats
 
 
 LOG = logging.getLogger("APRSD")
@@ -24,8 +23,10 @@ class VersionPlugin(plugin.APRSDRegexCommandPluginBase):
         # fromcall = packet.get("from")
         # message = packet.get("message_text", None)
         # ack = packet.get("msgNo", "0")
-        s = collector.Collector().collect()
+        stats_obj = stats.APRSDStats()
+        s = stats_obj.stats()
+        print(s)
         return "APRSD ver:{} uptime:{}".format(
             aprsd.__version__,
-            s["APRSDStats"]["uptime"],
+            s["aprsd"]["uptime"],
         )
@@ -110,6 +110,7 @@ class USMetarPlugin(plugin.APRSDRegexCommandPluginBase, plugin.APRSFIKEYMixin):
 
     @trace.trace
     def process(self, packet):
+        print("FISTY")
         fromcall = packet.get("from")
         message = packet.get("message_text", None)
         # ack = packet.get("msgNo", "0")
 14	aprsd/rpc/__init__.py	Normal file
@@ -0,0 +1,14 @@
+import rpyc
+
+
+class AuthSocketStream(rpyc.SocketStream):
+    """Used to authenitcate the RPC stream to remote."""
+
+    @classmethod
+    def connect(cls, *args, authorizer=None, **kwargs):
+        stream_obj = super().connect(*args, **kwargs)
+
+        if callable(authorizer):
+            authorizer(stream_obj.sock)
+
+        return stream_obj
 165	aprsd/rpc/client.py	Normal file
@@ -0,0 +1,165 @@
+import json
+import logging
+
+from oslo_config import cfg
+import rpyc
+
+from aprsd import conf  # noqa
+from aprsd import rpc
+
+
+CONF = cfg.CONF
+LOG = logging.getLogger("APRSD")
+
+
+class RPCClient:
+    _instance = None
+    _rpc_client = None
+
+    ip = None
+    port = None
+    magic_word = None
+
+    def __new__(cls, *args, **kwargs):
+        if cls._instance is None:
+            cls._instance = super().__new__(cls)
+        return cls._instance
+
+    def __init__(self, ip=None, port=None, magic_word=None):
+        if ip:
+            self.ip = ip
+        else:
+            self.ip = CONF.rpc_settings.ip
+        if port:
+            self.port = int(port)
+        else:
+            self.port = CONF.rpc_settings.port
+        if magic_word:
+            self.magic_word = magic_word
+        else:
+            self.magic_word = CONF.rpc_settings.magic_word
+        self._check_settings()
+        self.get_rpc_client()
+
+    def _check_settings(self):
+        if not CONF.rpc_settings.enabled:
+            LOG.warning("RPC is not enabled, no way to get stats!!")
+
+        if self.magic_word == conf.common.APRSD_DEFAULT_MAGIC_WORD:
+            LOG.warning("You are using the default RPC magic word!!!")
+            LOG.warning("edit aprsd.conf and change rpc_settings.magic_word")
+
+        LOG.debug(f"RPC Client: {self.ip}:{self.port} {self.magic_word}")
+
+    def _rpyc_connect(
+        self, host, port, service=rpyc.VoidService,
+        config={}, ipv6=False,
+        keepalive=False, authorizer=None,
+    ):
+        LOG.info(f"Connecting to RPC host '{host}:{port}'")
+        try:
+            s = rpc.AuthSocketStream.connect(
+                host, port, ipv6=ipv6, keepalive=keepalive,
+                authorizer=authorizer,
+            )
+            return rpyc.utils.factory.connect_stream(s, service, config=config)
+        except ConnectionRefusedError:
+            LOG.error(f"Failed to connect to RPC host '{host}:{port}'")
+            return None
+
+    def get_rpc_client(self):
+        if not self._rpc_client:
+            self._rpc_client = self._rpyc_connect(
+                self.ip,
+                self.port,
+                authorizer=lambda sock: sock.send(self.magic_word.encode()),
+            )
+        return self._rpc_client
+
+    def get_stats_dict(self):
+        cl = self.get_rpc_client()
+        result = {}
+        if not cl:
+            return result
+
+        try:
+            rpc_stats_dict = cl.root.get_stats()
+            result = json.loads(rpc_stats_dict)
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
+
+    def get_stats(self):
+        cl = self.get_rpc_client()
+        result = {}
+        if not cl:
+            return result
+
+        try:
+            result = cl.root.get_stats_obj()
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
+
+    def get_packet_track(self):
+        cl = self.get_rpc_client()
+        result = None
+        if not cl:
+            return result
+        try:
+            result = cl.root.get_packet_track()
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
+
+    def get_packet_list(self):
+        cl = self.get_rpc_client()
+        result = None
+        if not cl:
+            return result
+        try:
+            result = cl.root.get_packet_list()
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
+
+    def get_watch_list(self):
+        cl = self.get_rpc_client()
+        result = None
+        if not cl:
+            return result
+        try:
+            result = cl.root.get_watch_list()
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
+
+    def get_seen_list(self):
+        cl = self.get_rpc_client()
+        result = None
+        if not cl:
+            return result
+        try:
+            result = cl.root.get_seen_list()
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
+
+    def get_log_entries(self):
+        cl = self.get_rpc_client()
+        result = None
+        if not cl:
+            return result
+        try:
+            result_str = cl.root.get_log_entries()
+            result = json.loads(result_str)
+        except EOFError:
+            LOG.error("Lost connection to RPC Host")
+            self._rpc_client = None
+        return result
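`RPCClient` above caches `_rpc_client` and sets it back to `None` on `EOFError`, so the next accessor call transparently reconnects. That lazy-reconnect shape, sketched without rpyc (all names here are stand-ins for illustration):

```python
class LazyClient:
    """Cache a connection; on failure, drop it so the next call reconnects."""

    def __init__(self, connect):
        self._connect = connect   # factory returning a connection (or None)
        self._conn = None

    def get_conn(self):
        if self._conn is None:
            self._conn = self._connect()
        return self._conn

    def call(self, fn):
        conn = self.get_conn()
        if conn is None:
            return None
        try:
            return fn(conn)
        except EOFError:
            # Same recovery as RPCClient: forget the dead connection
            # so get_conn() reconnects on the next call.
            self._conn = None
            return None


client = LazyClient(lambda: object())
print(client.call(lambda conn: "ok"))  # ok
```

The design choice here is that every remote accessor degrades to an empty result instead of raising, which keeps the stats consumers (e.g. the web UI) alive while the server restarts.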
 99	aprsd/rpc/server.py	Normal file
@@ -0,0 +1,99 @@
+import json
+import logging
+
+from oslo_config import cfg
+import rpyc
+from rpyc.utils.authenticators import AuthenticationError
+from rpyc.utils.server import ThreadPoolServer
+
+from aprsd import conf  # noqa: F401
+from aprsd import packets, stats, threads
+from aprsd.threads import log_monitor
+
+
+CONF = cfg.CONF
+LOG = logging.getLogger("APRSD")
+
+
+def magic_word_authenticator(sock):
+    client_ip = sock.getpeername()[0]
+    magic = sock.recv(len(CONF.rpc_settings.magic_word)).decode()
+    if magic != CONF.rpc_settings.magic_word:
+        LOG.error(
+            f"wrong magic word passed from {client_ip} "
+            "'{magic}' != '{CONF.rpc_settings.magic_word}'",
+        )
+        raise AuthenticationError(
+            f"wrong magic word passed in '{magic}'"
+            f" != '{CONF.rpc_settings.magic_word}'",
+        )
+    return sock, None
+
+
+class APRSDRPCThread(threads.APRSDThread):
+    def __init__(self):
+        super().__init__(name="RPCThread")
+        self.thread = ThreadPoolServer(
+            APRSDService,
+            port=CONF.rpc_settings.port,
+            protocol_config={"allow_public_attrs": True},
+            authenticator=magic_word_authenticator,
+        )
+
+    def stop(self):
+        if self.thread:
+            self.thread.close()
+        self.thread_stop = True
+
+    def loop(self):
+        # there is no loop as run is blocked
+        if self.thread and not self.thread_stop:
+            # This is a blocking call
+            self.thread.start()
+
+
+@rpyc.service
+class APRSDService(rpyc.Service):
+    def on_connect(self, conn):
+        # code that runs when a connection is created
+        # (to init the service, if needed)
+        LOG.info("RPC Client Connected")
+        self._conn = conn
+
+    def on_disconnect(self, conn):
+        # code that runs after the connection has already closed
+        # (to finalize the service, if needed)
+        LOG.info("RPC Client Disconnected")
+        self._conn = None
+
+    @rpyc.exposed
+    def get_stats(self):
+        stat = stats.APRSDStats()
+        stats_dict = stat.stats()
+        return_str = json.dumps(stats_dict, indent=4, sort_keys=True, default=str)
+        return return_str
+
+    @rpyc.exposed
+    def get_stats_obj(self):
+        return stats.APRSDStats()
+
+    @rpyc.exposed
+    def get_packet_list(self):
+        return packets.PacketList()
+
+    @rpyc.exposed
+    def get_packet_track(self):
+        return packets.PacketTrack()
+
+    @rpyc.exposed
+    def get_watch_list(self):
+        return packets.WatchList()
+
+    @rpyc.exposed
+    def get_seen_list(self):
+        return packets.SeenList()
+
+    @rpyc.exposed
+    def get_log_entries(self):
+        entries = log_monitor.LogEntries().get_all_and_purge()
+        return json.dumps(entries, default=str)
 266	aprsd/stats.py	Normal file
@@ -0,0 +1,266 @@
+import datetime
+import logging
+import threading
+
+from oslo_config import cfg
+import wrapt
+
+import aprsd
+from aprsd import packets, plugin, utils
+
+
+CONF = cfg.CONF
+LOG = logging.getLogger("APRSD")
+
+
+class APRSDStats:
+
+    _instance = None
+    lock = threading.Lock()
+
+    start_time = None
+    _aprsis_server = None
+    _aprsis_keepalive = None
+
+    _email_thread_last_time = None
+    _email_tx = 0
+    _email_rx = 0
+
+    _mem_current = 0
+    _mem_peak = 0
+
+    _thread_info = {}
+
+    _pkt_cnt = {
+        "Packet": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "AckPacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "GPSPacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "StatusPacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "MicEPacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "MessagePacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "WeatherPacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+        "ObjectPacket": {
+            "tx": 0,
+            "rx": 0,
+        },
+    }
+
+    def __new__(cls, *args, **kwargs):
+        if cls._instance is None:
+            cls._instance = super().__new__(cls)
+            # any init here
+            cls._instance.start_time = datetime.datetime.now()
+            cls._instance._aprsis_keepalive = datetime.datetime.now()
+        return cls._instance
+
+    @wrapt.synchronized(lock)
+    @property
+    def uptime(self):
+        return datetime.datetime.now() - self.start_time
+
+    @wrapt.synchronized(lock)
+    @property
+    def memory(self):
+        return self._mem_current
+
+    @wrapt.synchronized(lock)
+    def set_memory(self, memory):
+        self._mem_current = memory
+
+    @wrapt.synchronized(lock)
+    @property
+    def memory_peak(self):
+        return self._mem_peak
+
+    @wrapt.synchronized(lock)
+    def set_memory_peak(self, memory):
+        self._mem_peak = memory
+
+    @wrapt.synchronized(lock)
+    def set_thread_info(self, thread_info):
+        self._thread_info = thread_info
+
+    @wrapt.synchronized(lock)
+    @property
+    def thread_info(self):
+        return self._thread_info
+
+    @wrapt.synchronized(lock)
+    @property
+    def aprsis_server(self):
+        return self._aprsis_server
+
+    @wrapt.synchronized(lock)
+    def set_aprsis_server(self, server):
+        self._aprsis_server = server
+
+    @wrapt.synchronized(lock)
+    @property
+    def aprsis_keepalive(self):
+        return self._aprsis_keepalive
+
+    @wrapt.synchronized(lock)
+    def set_aprsis_keepalive(self):
+        self._aprsis_keepalive = datetime.datetime.now()
+
+    def rx(self, packet):
+        pkt_type = packet.__class__.__name__
+        if pkt_type not in self._pkt_cnt:
+            self._pkt_cnt[pkt_type] = {
+                "tx": 0,
+                "rx": 0,
+            }
+        self._pkt_cnt[pkt_type]["rx"] += 1
+
+    def tx(self, packet):
+        pkt_type = packet.__class__.__name__
+        if pkt_type not in self._pkt_cnt:
+            self._pkt_cnt[pkt_type] = {
+                "tx": 0,
+                "rx": 0,
+            }
+        self._pkt_cnt[pkt_type]["tx"] += 1
+
+    @wrapt.synchronized(lock)
+    @property
+    def msgs_tracked(self):
+        return packets.PacketTrack().total_tracked
+
+    @wrapt.synchronized(lock)
+    @property
+    def email_tx(self):
+        return self._email_tx
+
+    @wrapt.synchronized(lock)
+    def email_tx_inc(self):
+        self._email_tx += 1
+
+    @wrapt.synchronized(lock)
+    @property
+    def email_rx(self):
+        return self._email_rx
+
+    @wrapt.synchronized(lock)
+    def email_rx_inc(self):
+        self._email_rx += 1
+
+    @wrapt.synchronized(lock)
+    @property
+    def email_thread_time(self):
+        return self._email_thread_last_time
+
+    @wrapt.synchronized(lock)
+    def email_thread_update(self):
+        self._email_thread_last_time = datetime.datetime.now()
+
+    @wrapt.synchronized(lock)
+    def stats(self):
+        now = datetime.datetime.now()
+        if self._email_thread_last_time:
+            last_update = str(now - self._email_thread_last_time)
+        else:
+            last_update = "never"
+
+        if self._aprsis_keepalive:
+            last_aprsis_keepalive = str(now - self._aprsis_keepalive)
+        else:
+            last_aprsis_keepalive = "never"
+
+        pm = plugin.PluginManager()
+        plugins = pm.get_plugins()
+        plugin_stats = {}
+        if plugins:
+            def full_name_with_qualname(obj):
+                return "{}.{}".format(
+                    obj.__class__.__module__,
+                    obj.__class__.__qualname__,
+                )
+
+            for p in plugins:
+                plugin_stats[full_name_with_qualname(p)] = {
+                    "enabled": p.enabled,
+                    "rx": p.rx_count,
+                    "tx": p.tx_count,
+                    "version": p.version,
+                }
+
+        wl = packets.WatchList()
+        sl = packets.SeenList()
+        pl = packets.PacketList()
+
+        stats = {
+            "aprsd": {
+                "version": aprsd.__version__,
+                "uptime": utils.strfdelta(self.uptime),
+                "callsign": CONF.callsign,
+                "memory_current": int(self.memory),
+                "memory_current_str": utils.human_size(self.memory),
+                "memory_peak": int(self.memory_peak),
+                "memory_peak_str": utils.human_size(self.memory_peak),
+                "threads": self._thread_info,
+                "watch_list": wl.get_all(),
+                "seen_list": sl.get_all(),
+            },
+            "aprs-is": {
+                "server": str(self.aprsis_server),
+                "callsign": CONF.aprs_network.login,
+                "last_update": last_aprsis_keepalive,
+            },
+            "packets": {
+                "total_tracked": int(pl.total_tx() + pl.total_rx()),
+                "total_sent": int(pl.total_tx()),
+                "total_received": int(pl.total_rx()),
+                "by_type": self._pkt_cnt,
+            },
+            "messages": {
+                "sent": self._pkt_cnt["MessagePacket"]["tx"],
+                "received": self._pkt_cnt["MessagePacket"]["tx"],
+                "ack_sent": self._pkt_cnt["AckPacket"]["tx"],
+            },
+            "email": {
+                "enabled": CONF.email_plugin.enabled,
+                "sent": int(self._email_tx),
+                "received": int(self._email_rx),
+                "thread_last_update": last_update,
+            },
+            "plugins": plugin_stats,
+        }
+        return stats
+
+    def __str__(self):
+        pl = packets.PacketList()
+        return (
+            "Uptime:{} Msgs TX:{} RX:{} "
+            "ACK: TX:{} RX:{} "
+            "Email TX:{} RX:{} LastLoop:{} ".format(
+                self.uptime,
+                pl.total_tx(),
+                pl.total_rx(),
+                self._pkt_cnt["AckPacket"]["tx"],
+                self._pkt_cnt["AckPacket"]["rx"],
+                self._email_tx,
+                self._email_rx,
+                self._email_thread_last_time,
+            )
+        )
@ -1,18 +0,0 @@
|
||||
from aprsd import plugin
|
||||
from aprsd.client import stats as client_stats
|
||||
from aprsd.packets import packet_list, seen_list, tracker, watch_list
|
||||
from aprsd.stats import app, collector
|
||||
from aprsd.threads import aprsd
|
||||
|
||||
|
||||
# Create the collector and register all the objects
|
||||
# that APRSD has that implement the stats protocol
|
||||
stats_collector = collector.Collector()
|
||||
stats_collector.register_producer(app.APRSDStats)
|
||||
stats_collector.register_producer(packet_list.PacketList)
|
||||
stats_collector.register_producer(watch_list.WatchList)
|
||||
stats_collector.register_producer(tracker.PacketTrack)
|
||||
stats_collector.register_producer(plugin.PluginManager)
|
||||
stats_collector.register_producer(aprsd.APRSDThreadList)
|
||||
stats_collector.register_producer(client_stats.APRSClientStats)
|
||||
stats_collector.register_producer(seen_list.SeenList)
|
@@ -1,49 +0,0 @@
-import datetime
-import tracemalloc
-
-from oslo_config import cfg
-
-import aprsd
-from aprsd import utils
-from aprsd.log import log as aprsd_log
-
-
-CONF = cfg.CONF
-
-
-class APRSDStats:
-    """The AppStats class is used to collect stats from the application."""
-
-    _instance = None
-    start_time = None
-
-    def __new__(cls, *args, **kwargs):
-        """Have to override the new method to make this a singleton
-
-        instead of using @singletone decorator so the unit tests work.
-        """
-        if not cls._instance:
-            cls._instance = super().__new__(cls)
-            cls._instance.start_time = datetime.datetime.now()
-        return cls._instance
-
-    def uptime(self):
-        return datetime.datetime.now() - self.start_time
-
-    def stats(self, serializable=False) -> dict:
-        current, peak = tracemalloc.get_traced_memory()
-        uptime = self.uptime()
-        qsize = aprsd_log.logging_queue.qsize()
-        if serializable:
-            uptime = str(uptime)
-        stats = {
-            "version": aprsd.__version__,
-            "uptime": uptime,
-            "callsign": CONF.callsign,
-            "memory_current": int(current),
-            "memory_current_str": utils.human_size(current),
-            "memory_peak": int(peak),
-            "memory_peak_str": utils.human_size(peak),
-            "loging_queue": qsize,
-        }
-        return stats
@@ -1,42 +0,0 @@
import logging
from typing import Callable, Protocol, runtime_checkable

from aprsd.utils import singleton


LOG = logging.getLogger("APRSD")


@runtime_checkable
class StatsProducer(Protocol):
    """The StatsProducer protocol is used to define the interface for collecting stats."""
    def stats(self, serializable=False) -> dict:
        """provide stats in a dictionary format."""
        ...


@singleton
class Collector:
    """The Collector class is used to collect stats from multiple StatsProducer instances."""
    def __init__(self):
        self.producers: list[Callable] = []

    def collect(self, serializable=False) -> dict:
        stats = {}
        for name in self.producers:
            cls = name()
            try:
                stats[cls.__class__.__name__] = cls.stats(serializable=serializable).copy()
            except Exception as e:
                LOG.error(f"Error in producer {name} (stats): {e}")
        return stats

    def register_producer(self, producer_name: Callable):
        if not isinstance(producer_name, StatsProducer):
            raise TypeError(f"Producer {producer_name} is not a StatsProducer")
        self.producers.append(producer_name)

    def unregister_producer(self, producer_name: Callable):
        if not isinstance(producer_name, StatsProducer):
            raise TypeError(f"Producer {producer_name} is not a StatsProducer")
        self.producers.remove(producer_name)
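The collector module above guards registration with an `isinstance` check against a `runtime_checkable` `Protocol`. A dependency-free sketch of that pattern (class and producer names here are illustrative, not aprsd's):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class StatsProducer(Protocol):
    def stats(self, serializable=False) -> dict: ...

class Collector:
    def __init__(self):
        self.producers = []

    def register_producer(self, producer_cls):
        # runtime_checkable Protocols support isinstance() checks,
        # but they only verify that the named methods exist.
        if not isinstance(producer_cls(), StatsProducer):
            raise TypeError(f"{producer_cls} is not a StatsProducer")
        self.producers.append(producer_cls)

    def collect(self, serializable=False) -> dict:
        out = {}
        for producer_cls in self.producers:
            inst = producer_cls()
            out[inst.__class__.__name__] = inst.stats(serializable=serializable).copy()
        return out

class UptimeStats:
    def stats(self, serializable=False) -> dict:
        return {"uptime": "0:00:01"}

c = Collector()
c.register_producer(UptimeStats)
```

Note the structural nature of the check: any class with a `stats` method passes, whether or not it inherits from the protocol.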
@@ -3,9 +3,8 @@ import queue
# Make these available to anyone importing
# aprsd.threads
from .aprsd import APRSDThread, APRSDThreadList # noqa: F401
from .rx import ( # noqa: F401
    APRSDDupeRXThread, APRSDProcessPacketThread, APRSDRXThread,
)
from .keep_alive import KeepAliveThread # noqa: F401
from .rx import APRSDRXThread, APRSDDupeRXThread, APRSDProcessPacketThread # noqa: F401


packet_queue = queue.Queue(maxsize=20)
@@ -2,7 +2,6 @@ import abc
import datetime
import logging
import threading
from typing import List

import wrapt

@@ -10,10 +9,43 @@ import wrapt
LOG = logging.getLogger("APRSD")


class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):
    """Base class for all threads in APRSD."""
class APRSDThreadList:
    """Singleton class that keeps track of application wide threads."""

    loop_count = 1
    _instance = None

    threads_list = []
    lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls.threads_list = []
        return cls._instance

    @wrapt.synchronized(lock)
    def add(self, thread_obj):
        self.threads_list.append(thread_obj)

    @wrapt.synchronized(lock)
    def remove(self, thread_obj):
        self.threads_list.remove(thread_obj)

    @wrapt.synchronized(lock)
    def stop_all(self):
        """Iterate over all threads and call stop on them."""
        for th in self.threads_list:
            LOG.info(f"Stopping Thread {th.name}")
            if hasattr(th, "packet"):
                LOG.info(F"{th.name} packet {th.packet}")
            th.stop()

    @wrapt.synchronized(lock)
    def __len__(self):
        return len(self.threads_list)


class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):

    def __init__(self, name):
        super().__init__(name=name)
@@ -47,7 +79,6 @@ class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):
    def run(self):
        LOG.debug("Starting")
        while not self._should_quit():
            self.loop_count += 1
            can_loop = self.loop()
            self._last_loop = datetime.datetime.now()
            if not can_loop:
@@ -55,65 +86,3 @@ class APRSDThread(threading.Thread, metaclass=abc.ABCMeta):
        self._cleanup()
        APRSDThreadList().remove(self)
        LOG.debug("Exiting")


class APRSDThreadList:
    """Singleton class that keeps track of application wide threads."""

    _instance = None

    threads_list: List[APRSDThread] = []
    lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls.threads_list = []
        return cls._instance

    def stats(self, serializable=False) -> dict:
        stats = {}
        for th in self.threads_list:
            age = th.loop_age()
            if serializable:
                age = str(age)
            stats[th.name] = {
                "name": th.name,
                "class": th.__class__.__name__,
                "alive": th.is_alive(),
                "age": th.loop_age(),
                "loop_count": th.loop_count,
            }
        return stats

    @wrapt.synchronized(lock)
    def add(self, thread_obj):
        self.threads_list.append(thread_obj)

    @wrapt.synchronized(lock)
    def remove(self, thread_obj):
        self.threads_list.remove(thread_obj)

    @wrapt.synchronized(lock)
    def stop_all(self):
        """Iterate over all threads and call stop on them."""
        for th in self.threads_list:
            LOG.info(f"Stopping Thread {th.name}")
            if hasattr(th, "packet"):
                LOG.info(F"{th.name} packet {th.packet}")
            th.stop()

    @wrapt.synchronized(lock)
    def info(self):
        """Go through all the threads and collect info about each."""
        info = {}
        for thread in self.threads_list:
            alive = thread.is_alive()
            age = thread.loop_age()
            key = thread.__class__.__name__
            info[key] = {"alive": True if alive else False, "age": age, "name": thread.name}
        return info

    @wrapt.synchronized(lock)
    def __len__(self):
        return len(self.threads_list)
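Both versions of `APRSDThreadList` in this diff are singletons built by overriding `__new__`, with `wrapt.synchronized` guarding list mutation. A dependency-free sketch of the same shape, using a plain `threading.Lock` instead of `wrapt` (names here are illustrative):

```python
import threading

class ThreadRegistry:
    """Process-wide registry; __new__ enforces a single shared instance."""
    _instance = None
    lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.members = []
        return cls._instance

    def add(self, obj):
        # Serialize mutation of the shared list across threads.
        with self.lock:
            self.members.append(obj)

    def __len__(self):
        with self.lock:
            return len(self.members)

a = ThreadRegistry()
b = ThreadRegistry()  # same object as `a`
a.add("KeepAliveThread")
```

Because state is attached in `__new__` only on first construction, every later "constructor" call hands back the already-populated registry.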
@@ -3,19 +3,14 @@ import logging
import time
import tracemalloc

from loguru import logger
from oslo_config import cfg

from aprsd import packets, utils
from aprsd.client import client_factory
from aprsd.log import log as aprsd_log
from aprsd.stats import collector
from aprsd import client, packets, stats, utils
from aprsd.threads import APRSDThread, APRSDThreadList


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")
LOGU = logger


class KeepAliveThread(APRSDThread):
@@ -29,75 +24,64 @@ class KeepAliveThread(APRSDThread):
        self.max_delta = datetime.timedelta(**max_timeout)

    def loop(self):
        if self.loop_count % 60 == 0:
            stats_json = collector.Collector().collect()
        if self.cntr % 60 == 0:
            pkt_tracker = packets.PacketTrack()
            stats_obj = stats.APRSDStats()
            pl = packets.PacketList()
            thread_list = APRSDThreadList()
            now = datetime.datetime.now()

            if "EmailStats" in stats_json:
                email_stats = stats_json["EmailStats"]
                if email_stats.get("last_check_time"):
                    email_thread_time = utils.strfdelta(now - email_stats["last_check_time"])
                else:
                    email_thread_time = "N/A"
            last_email = stats_obj.email_thread_time
            if last_email:
                email_thread_time = utils.strfdelta(now - last_email)
            else:
                email_thread_time = "N/A"

            if "APRSClientStats" in stats_json and stats_json["APRSClientStats"].get("transport") == "aprsis":
                if stats_json["APRSClientStats"].get("server_keepalive"):
                    last_msg_time = utils.strfdelta(now - stats_json["APRSClientStats"]["server_keepalive"])
                else:
                    last_msg_time = "N/A"
            else:
                last_msg_time = "N/A"
            last_msg_time = utils.strfdelta(now - stats_obj.aprsis_keepalive)

            tracked_packets = stats_json["PacketTrack"]["total_tracked"]
            tx_msg = 0
            rx_msg = 0
            if "PacketList" in stats_json:
                msg_packets = stats_json["PacketList"].get("MessagePacket")
                if msg_packets:
                    tx_msg = msg_packets.get("tx", 0)
                    rx_msg = msg_packets.get("rx", 0)
            current, peak = tracemalloc.get_traced_memory()
            stats_obj.set_memory(current)
            stats_obj.set_memory_peak(peak)

            login = CONF.callsign

            tracked_packets = len(pkt_tracker)

            keepalive = (
                "{} - Uptime {} RX:{} TX:{} Tracker:{} Msgs TX:{} RX:{} "
                "Last:{} Email: {} - RAM Current:{} Peak:{} Threads:{} LoggingQueue:{}"
                "Last:{} Email: {} - RAM Current:{} Peak:{} Threads:{}"
            ).format(
                stats_json["APRSDStats"]["callsign"],
                stats_json["APRSDStats"]["uptime"],
                login,
                utils.strfdelta(stats_obj.uptime),
                pl.total_rx(),
                pl.total_tx(),
                tracked_packets,
                tx_msg,
                rx_msg,
                stats_obj._pkt_cnt["MessagePacket"]["tx"],
                stats_obj._pkt_cnt["MessagePacket"]["rx"],
                last_msg_time,
                email_thread_time,
                stats_json["APRSDStats"]["memory_current_str"],
                stats_json["APRSDStats"]["memory_peak_str"],
                utils.human_size(current),
                utils.human_size(peak),
                len(thread_list),
                aprsd_log.logging_queue.qsize(),
            )
            LOG.info(keepalive)
            if "APRSDThreadList" in stats_json:
                thread_list = stats_json["APRSDThreadList"]
                for thread_name in thread_list:
                    thread = thread_list[thread_name]
                    alive = thread["alive"]
                    age = thread["age"]
                    key = thread["name"]
                    if not alive:
                        LOG.error(f"Thread {thread}")

                    thread_hex = f"fg {utils.hex_from_name(key)}"
                    t_name = f"<{thread_hex}>{key:<15}</{thread_hex}>"
                    thread_msg = f"{t_name} Alive? {str(alive): <5} {str(age): <20}"
                    LOGU.opt(colors=True).info(thread_msg)
                    # LOG.info(f"{key: <15} Alive? {str(alive): <5} {str(age): <20}")
            thread_out = []
            thread_info = {}
            for thread in thread_list.threads_list:
                alive = thread.is_alive()
                age = thread.loop_age()
                key = thread.__class__.__name__
                thread_out.append(f"{key}:{alive}:{age}")
                if key not in thread_info:
                    thread_info[key] = {}
                thread_info[key]["alive"] = alive
                thread_info[key]["age"] = age
                if not alive:
                    LOG.error(f"Thread {thread}")
            LOG.info(",".join(thread_out))
            stats_obj.set_thread_info(thread_info)

            # check the APRS connection
            cl = client_factory.create()
            cl = client.factory.create()
            # Reset the connection if it's dead and this isn't our
            # First time through the loop.
            # The first time through the loop can happen at startup where
@@ -105,19 +89,19 @@ class KeepAliveThread(APRSDThread):
            # to make it's connection the first time.
            if not cl.is_alive() and self.cntr > 0:
                LOG.error(f"{cl.__class__.__name__} is not alive!!! Resetting")
                client_factory.create().reset()
            # else:
            #     # See if we should reset the aprs-is client
            #     # Due to losing a keepalive from them
            #     delta_dict = utils.parse_delta_str(last_msg_time)
            #     delta = datetime.timedelta(**delta_dict)
            #
            #     if delta > self.max_delta:
            #         # We haven't gotten a keepalive from aprs-is in a while
            #         # reset the connection.a
            #         if not client.KISSClient.is_enabled():
            #             LOG.warning(f"Resetting connection to APRS-IS {delta}")
            #             client.factory.create().reset()
                client.factory.create().reset()
            else:
                # See if we should reset the aprs-is client
                # Due to losing a keepalive from them
                delta_dict = utils.parse_delta_str(last_msg_time)
                delta = datetime.timedelta(**delta_dict)

                if delta > self.max_delta:
                    # We haven't gotten a keepalive from aprs-is in a while
                    # reset the connection.a
                    if not client.KISSClient.is_enabled():
                        LOG.warning(f"Resetting connection to APRS-IS {delta}")
                        client.factory.create().reset()

        # Check version every day
        delta = now - self.checker_time
@@ -126,6 +110,6 @@ class KeepAliveThread(APRSDThread):
            level, msg = utils._check_version()
            if level:
                LOG.warning(msg)
        self.cntr += 1
            self.cntr += 1
        time.sleep(1)
        return True
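The keep-alive logic in both versions boils down to one decision: reset the client when the time since the last server keepalive exceeds `max_delta`. A standalone sketch of that comparison (function and variable names are hypothetical):

```python
import datetime

def should_reset(last_keepalive, now, max_delta=datetime.timedelta(minutes=5)):
    """Return True when the server keepalive is older than the allowed window."""
    if last_keepalive is None:
        return False  # never seen a keepalive yet; nothing to compare against
    return (now - last_keepalive) > max_delta

now = datetime.datetime.now()
stale = now - datetime.timedelta(minutes=10)   # too old -> reset
fresh = now - datetime.timedelta(seconds=30)   # recent -> leave alone
```

`timedelta` objects compare directly, so no manual conversion to seconds is needed.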
@@ -1,54 +1,25 @@
import datetime
import logging
import threading

from oslo_config import cfg
import requests
import wrapt

from aprsd import threads
from aprsd.log import log


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


def send_log_entries(force=False):
    """Send all of the log entries to the web interface."""
    if CONF.admin.web_enabled:
        if force or LogEntries().is_purge_ready():
            entries = LogEntries().get_all_and_purge()
            if entries:
                try:
                    requests.post(
                        f"http://{CONF.admin.web_ip}:{CONF.admin.web_port}/log_entries",
                        json=entries,
                        auth=(CONF.admin.user, CONF.admin.password),
                    )
                except Exception:
                    LOG.warning(f"Failed to send log entries. len={len(entries)}")


class LogEntries:
    entries = []
    lock = threading.Lock()
    _instance = None
    last_purge = datetime.datetime.now()
    max_delta = datetime.timedelta(
        hours=0.0, minutes=0, seconds=2,
    )

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def stats(self) -> dict:
        return {
            "log_entries": self.entries,
        }

    @wrapt.synchronized(lock)
    def add(self, entry):
        self.entries.append(entry)
@@ -57,18 +28,8 @@ class LogEntries:
    def get_all_and_purge(self):
        entries = self.entries.copy()
        self.entries = []
        self.last_purge = datetime.datetime.now()
        return entries

    def is_purge_ready(self):
        now = datetime.datetime.now()
        if (
            now - self.last_purge > self.max_delta
            and len(self.entries) > 1
        ):
            return True
        return False

    @wrapt.synchronized(lock)
    def __len__(self):
        return len(self.entries)
@@ -79,10 +40,6 @@ class LogMonitorThread(threads.APRSDThread):
    def __init__(self):
        super().__init__("LogMonitorThread")

    def stop(self):
        send_log_entries(force=True)
        super().stop()

    def loop(self):
        try:
            record = log.logging_queue.get(block=True, timeout=2)
@@ -97,7 +54,6 @@ class LogMonitorThread(threads.APRSDThread):
            # Just ignore thi
            pass

        send_log_entries()
        return True

    def json_record(self, record):
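`LogEntries` batches log records and flushes them either on demand or once a quiet period (`max_delta`) has elapsed and more than one entry is queued. A self-contained sketch of that purge policy (class name is illustrative):

```python
import datetime

class LogBuffer:
    """Batch entries; flush on demand or after a quiet period."""
    def __init__(self, max_delta=datetime.timedelta(seconds=2)):
        self.entries = []
        self.last_purge = datetime.datetime.now()
        self.max_delta = max_delta

    def add(self, entry):
        self.entries.append(entry)

    def is_purge_ready(self, now=None):
        # Ready once the quiet period has passed AND there is a batch worth sending.
        now = now or datetime.datetime.now()
        return (now - self.last_purge) > self.max_delta and len(self.entries) > 1

    def get_all_and_purge(self):
        entries, self.entries = self.entries, []
        self.last_purge = datetime.datetime.now()
        return entries

buf = LogBuffer()
buf.add({"msg": "a"})
buf.add({"msg": "b"})
later = datetime.datetime.now() + datetime.timedelta(seconds=5)
```

Taking the `now` as a parameter makes the time-based condition easy to test without sleeping.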
@@ -6,10 +6,7 @@ import time
import aprslib
from oslo_config import cfg

from aprsd import packets, plugin
from aprsd.client import client_factory
from aprsd.packets import collector
from aprsd.packets import log as packet_log
from aprsd import client, packets, plugin
from aprsd.threads import APRSDThread, tx


@@ -19,20 +16,15 @@ LOG = logging.getLogger("APRSD")

class APRSDRXThread(APRSDThread):
    def __init__(self, packet_queue):
        super().__init__("RX_PKT")
        super().__init__("RX_MSG")
        self.packet_queue = packet_queue
        self._client = client_factory.create()
        self._client = client.factory.create()

    def stop(self):
        self.thread_stop = True
        if self._client:
            self._client.stop()
        client.factory.create().client.stop()

    def loop(self):
        if not self._client:
            self._client = client_factory.create()
            time.sleep(1)
            return True
        # setup the consumer of messages and block until a messages
        try:
            # This will register a packet consumer with aprslib
@@ -44,32 +36,23 @@ class APRSDRXThread(APRSDThread):
            # and the aprslib developer didn't want to allow a PR to add
            # kwargs. :(
            # https://github.com/rossengeorgiev/aprs-python/pull/56
            self._client.consumer(
                self._process_packet, raw=False, blocking=False,
            self._client.client.consumer(
                self.process_packet, raw=False, blocking=False,
            )
        except (
            aprslib.exceptions.ConnectionDrop,
            aprslib.exceptions.ConnectionError,
        ):
            LOG.error("Connection dropped, reconnecting")
            time.sleep(5)
            # Force the deletion of the client object connected to aprs
            # This will cause a reconnect, next time client.get_client()
            # is called
            self._client.reset()
            time.sleep(5)
        except Exception:
            # LOG.exception(ex)
            LOG.error("Resetting connection and trying again.")
            self._client.reset()
            time.sleep(5)
        # Continue to loop
        return True

    def _process_packet(self, *args, **kwargs):
        """Intermediate callback so we can update the keepalive time."""
        # Now call the 'real' packet processing for a RX'x packet
        self.process_packet(*args, **kwargs)

    @abc.abstractmethod
    def process_packet(self, *args, **kwargs):
        pass
@@ -97,8 +80,7 @@ class APRSDDupeRXThread(APRSDRXThread):
        """
        packet = self._client.decode_packet(*args, **kwargs)
        # LOG.debug(raw)
        packet_log.log(packet)
        pkt_list = packets.PacketList()
        packet.log(header="RX")

        if isinstance(packet, packets.AckPacket):
            # We don't need to drop AckPackets, those should be
@@ -109,6 +91,7 @@ class APRSDDupeRXThread(APRSDRXThread):
        # For RF based APRS Clients we can get duplicate packets
        # So we need to track them and not process the dupes.
        found = False
        pkt_list = packets.PacketList()
        try:
            # Find the packet in the list of already seen packets
            # Based on the packet.key
@@ -117,11 +100,14 @@ class APRSDDupeRXThread(APRSDRXThread):
            found = False

        if not found:
            # We haven't seen this packet before, so we process it.
            collector.PacketCollector().rx(packet)
            # If we are in the process of already ack'ing
            # a packet, we should drop the packet
            # because it's a dupe within the time that
            # we send the 3 acks for the packet.
            pkt_list.rx(packet)
            self.packet_queue.put(packet)
        elif packet.timestamp - found.timestamp < CONF.packet_dupe_timeout:
            # If the packet came in within N seconds of the
            # If the packet came in within 60 seconds of the
            # Last time seeing the packet, then we drop it as a dupe.
            LOG.warning(f"Packet {packet.from_call}:{packet.msgNo} already tracked, dropping.")
        else:
@@ -129,7 +115,7 @@ class APRSDDupeRXThread(APRSDRXThread):
                f"Packet {packet.from_call}:{packet.msgNo} already tracked "
                f"but older than {CONF.packet_dupe_timeout} seconds. processing.",
            )
            collector.PacketCollector().rx(packet)
            pkt_list.rx(packet)
            self.packet_queue.put(packet)


@@ -151,29 +137,21 @@ class APRSDProcessPacketThread(APRSDThread):
    def __init__(self, packet_queue):
        self.packet_queue = packet_queue
        super().__init__("ProcessPKT")
        if not CONF.enable_sending_ack_packets:
            LOG.warning(
                "Sending ack packets is disabled, messages "
                "will not be acknowledged.",
            )
        self._loop_cnt = 1

    def process_ack_packet(self, packet):
        """We got an ack for a message, no need to resend it."""
        ack_num = packet.msgNo
        LOG.debug(f"Got ack for message {ack_num}")
        collector.PacketCollector().rx(packet)

    def process_piggyback_ack(self, packet):
        """We got an ack embedded in a packet."""
        ack_num = packet.ackMsgNo
        LOG.debug(f"Got PiggyBackAck for message {ack_num}")
        collector.PacketCollector().rx(packet)
        LOG.info(f"Got ack for message {ack_num}")
        pkt_tracker = packets.PacketTrack()
        pkt_tracker.remove(ack_num)

    def process_reject_packet(self, packet):
        """We got a reject message for a packet. Stop sending the message."""
        ack_num = packet.msgNo
        LOG.debug(f"Got REJECT for message {ack_num}")
        collector.PacketCollector().rx(packet)
        LOG.info(f"Got REJECT for message {ack_num}")
        pkt_tracker = packets.PacketTrack()
        pkt_tracker.remove(ack_num)

    def loop(self):
        try:
@@ -182,11 +160,12 @@ class APRSDProcessPacketThread(APRSDThread):
            self.process_packet(packet)
        except queue.Empty:
            pass
        self._loop_cnt += 1
        return True

    def process_packet(self, packet):
        """Process a packet received from aprs-is server."""
        LOG.debug(f"ProcessPKT-LOOP {self.loop_count}")
        LOG.debug(f"ProcessPKT-LOOP {self._loop_cnt}")
        our_call = CONF.callsign.lower()

        from_call = packet.from_call
@@ -209,10 +188,6 @@ class APRSDProcessPacketThread(APRSDThread):
        ):
            self.process_reject_packet(packet)
        else:
            if hasattr(packet, "ackMsgNo") and packet.ackMsgNo:
                # we got an ack embedded in this packet
                # we need to handle the ack
                self.process_piggyback_ack(packet)
            # Only ack messages that were sent directly to us
            if isinstance(packet, packets.MessagePacket):
                if to_call and to_call.lower() == our_call:
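The dupe logic in `APRSDDupeRXThread` hinges on one rule: a packet with an already-seen key is dropped only if it arrived within `packet_dupe_timeout` seconds of the last sighting; older repeats are processed again. A minimal standalone sketch of that window check (packets are plain dicts here, not aprsd's packet objects):

```python
def is_duplicate(packet, seen, dupe_timeout=60):
    """Packets with the same key within dupe_timeout seconds are dupes.

    `packet` is a dict with "key" and "timestamp" (epoch seconds);
    `seen` maps key -> timestamp of the last occurrence.
    """
    last = seen.get(packet["key"])
    if last is None:
        # Never seen before: record it and process.
        seen[packet["key"]] = packet["timestamp"]
        return False
    if packet["timestamp"] - last < dupe_timeout:
        return True  # repeat inside the window -> drop
    # Seen before, but old enough to process again.
    seen[packet["key"]] = packet["timestamp"]
    return False

seen = {}
p1 = {"key": "N0CALL:42", "timestamp": 1000}
p2 = {"key": "N0CALL:42", "timestamp": 1030}   # 30 s later -> dupe
p3 = {"key": "N0CALL:42", "timestamp": 2000}   # 1000 s later -> process
```

Refreshing the stored timestamp on late repeats keeps the window anchored to the most recent processed copy.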
@@ -1,44 +0,0 @@
import logging
import threading
import time

from oslo_config import cfg
import wrapt

from aprsd.stats import collector
from aprsd.threads import APRSDThread
from aprsd.utils import objectstore


CONF = cfg.CONF
LOG = logging.getLogger("APRSD")


class StatsStore(objectstore.ObjectStoreMixin):
    """Container to save the stats from the collector."""
    lock = threading.Lock()
    data = {}

    @wrapt.synchronized(lock)
    def add(self, stats: dict):
        self.data = stats


class APRSDStatsStoreThread(APRSDThread):
    """Save APRSD Stats to disk periodically."""

    # how often in seconds to write the file
    save_interval = 10

    def __init__(self):
        super().__init__("StatsStore")

    def loop(self):
        if self.loop_count % self.save_interval == 0:
            stats = collector.Collector().collect()
            ss = StatsStore()
            ss.add(stats)
            ss.save()

        time.sleep(1)
        return True
@@ -1,5 +1,4 @@
import logging
import threading
import time

from oslo_config import cfg
@@ -7,14 +6,11 @@ from rush import quota, throttle
from rush.contrib import decorator
from rush.limiters import periodic
from rush.stores import dictionary
import wrapt

from aprsd import client
from aprsd import conf # noqa
from aprsd import threads as aprsd_threads
from aprsd.client import client_factory
from aprsd.packets import collector, core
from aprsd.packets import log as packet_log
from aprsd.packets import tracker
from aprsd.packets import core, tracker


CONF = cfg.CONF
@@ -39,24 +35,16 @@ ack_t = throttle.Throttle(

msg_throttle_decorator = decorator.ThrottleDecorator(throttle=msg_t)
ack_throttle_decorator = decorator.ThrottleDecorator(throttle=ack_t)
s_lock = threading.Lock()


@wrapt.synchronized(s_lock)
@msg_throttle_decorator.sleep_and_retry
def send(packet: core.Packet, direct=False, aprs_client=None):
    """Send a packet either in a thread or directly to the client."""
    # prepare the packet for sending.
    # This constructs the packet.raw
    packet.prepare()
    # Have to call the collector to track the packet
    # After prepare, as prepare assigns the msgNo
    collector.PacketCollector().tx(packet)
    if isinstance(packet, core.AckPacket):
        if CONF.enable_sending_ack_packets:
            _send_ack(packet, direct=direct, aprs_client=aprs_client)
        else:
            LOG.info("Sending ack packets is disabled. Not sending AckPacket.")
        _send_ack(packet, direct=direct, aprs_client=aprs_client)
    else:
        _send_packet(packet, direct=direct, aprs_client=aprs_client)

@@ -83,18 +71,11 @@ def _send_direct(packet, aprs_client=None):
    if aprs_client:
        cl = aprs_client
    else:
        cl = client_factory.create()
        cl = client.factory.create()

    packet.update_timestamp()
    packet_log.log(packet, tx=True)
    try:
        cl.send(packet)
    except Exception as e:
        LOG.error(f"Failed to send packet: {packet}")
        LOG.error(e)
        return False
    else:
        return True
    packet.log(header="TX")
    cl.send(packet)


class SendPacketThread(aprsd_threads.APRSDThread):
@@ -102,7 +83,10 @@ class SendPacketThread(aprsd_threads.APRSDThread):

    def __init__(self, packet):
        self.packet = packet
        super().__init__(f"TX-{packet.to_call}-{self.packet.msgNo}")
        name = self.packet.raw[:5]
        super().__init__(f"TXPKT-{self.packet.msgNo}-{name}")
        pkt_tracker = tracker.PacketTrack()
        pkt_tracker.add(packet)

    def loop(self):
        """Loop until a message is acked or it gets delayed.
@@ -128,7 +112,7 @@ class SendPacketThread(aprsd_threads.APRSDThread):
                return False
            else:
                send_now = False
                if packet.send_count >= packet.retry_count:
                if packet.send_count == packet.retry_count:
                    # we reached the send limit, don't send again
                    # TODO(hemna) - Need to put this in a delayed queue?
                    LOG.info(
@@ -137,7 +121,8 @@ class SendPacketThread(aprsd_threads.APRSDThread):
                        "Message Send Complete. Max attempts reached"
                        f" {packet.retry_count}",
                    )
                    pkt_tracker.remove(packet.msgNo)
                    if not packet.allow_delay:
                        pkt_tracker.remove(packet.msgNo)
                    return False

                # Message is still outstanding and needs to be acked.
@@ -156,17 +141,8 @@ class SendPacketThread(aprsd_threads.APRSDThread):
                    # no attempt time, so lets send it, and start
                    # tracking the time.
                    packet.last_send_time = int(round(time.time()))
                    sent = False
                    try:
                        sent = _send_direct(packet)
                    except Exception:
                        LOG.error(f"Failed to send packet: {packet}")
                    else:
                        # If an exception happens while sending
                        # we don't want this attempt to count
                        # against the packet
                        if sent:
                            packet.send_count += 1
                    send(packet, direct=True)
                    packet.send_count += 1

            time.sleep(1)
            # Make sure we get called again.
@@ -176,24 +152,22 @@ class SendPacketThread(aprsd_threads.APRSDThread):

class SendAckThread(aprsd_threads.APRSDThread):
    loop_count: int = 1
    max_retries = 3

    def __init__(self, packet):
        self.packet = packet
        super().__init__(f"TXAck-{packet.to_call}-{self.packet.msgNo}")
        self.max_retries = CONF.default_ack_send_count
        super().__init__(f"SendAck-{self.packet.msgNo}")

    def loop(self):
        """Separate thread to send acks with retries."""
        send_now = False
        if self.packet.send_count == self.max_retries:
        if self.packet.send_count == self.packet.retry_count:
            # we reached the send limit, don't send again
            # TODO(hemna) - Need to put this in a delayed queue?
            LOG.debug(
            LOG.info(
                f"{self.packet.__class__.__name__}"
                f"({self.packet.msgNo}) "
                "Send Complete. Max attempts reached"
                f" {self.max_retries}",
                f" {self.packet.retry_count}",
            )
            return False

@@ -214,18 +188,8 @@ class SendAckThread(aprsd_threads.APRSDThread):
            send_now = True

        if send_now:
            sent = False
            try:
                sent = _send_direct(self.packet)
            except Exception:
                LOG.error(f"Failed to send packet: {self.packet}")
            else:
                # If an exception happens while sending
                # we don't want this attempt to count
                # against the packet
                if sent:
                    self.packet.send_count += 1

            send(self.packet, direct=True)
            self.packet.send_count += 1
            self.packet.last_send_time = int(round(time.time()))

        time.sleep(1)
@@ -266,15 +230,7 @@ class BeaconSendThread(aprsd_threads.APRSDThread):
            comment="APRSD GPS Beacon",
            symbol=CONF.beacon_symbol,
        )
        try:
            # Only send it once
            pkt.retry_count = 1
            send(pkt, direct=True)
        except Exception as e:
            LOG.error(f"Failed to send beacon: {e}")
            client_factory.create().reset()
            time.sleep(5)

        send(pkt, direct=True)
        self._loop_cnt += 1
        time.sleep(1)
        return True
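Both `SendPacketThread` and `SendAckThread` loop over the same three-way decision: give up once the retry limit is hit, send immediately if the packet has never been sent, or resend only after a backoff interval. That decision, isolated as a pure function (names and the 60-second default are illustrative, not aprsd's exact values):

```python
def next_action(send_count, retry_count, last_send_time, now, backoff=60):
    """Decide what a TX loop should do with an outstanding packet.

    Times are epoch seconds; returns "give_up", "send", or "wait".
    """
    if send_count >= retry_count:
        return "give_up"      # max attempts reached
    if last_send_time is None:
        return "send"         # never sent, send immediately
    if now - last_send_time >= backoff:
        return "send"         # waited long enough, resend
    return "wait"             # still inside the backoff window

action = next_action(send_count=1, retry_count=3, last_send_time=100, now=200)
```

Keeping the decision pure makes the retry policy testable without threads, sockets, or sleeps.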
@@ -1,8 +1,6 @@
"""Utilities and helper functions."""

import errno
import functools
import math
import os
import re
import sys
@@ -21,18 +19,7 @@ from .ring_buffer import RingBuffer # noqa: F401
if sys.version_info.major == 3 and sys.version_info.minor >= 3:
    from collections.abc import MutableMapping
else:
    from collections.abc import MutableMapping


def singleton(cls):
    """Make a class a Singleton class (only one instance)"""
    @functools.wraps(cls)
    def wrapper_singleton(*args, **kwargs):
        if wrapper_singleton.instance is None:
            wrapper_singleton.instance = cls(*args, **kwargs)
        return wrapper_singleton.instance
    wrapper_singleton.instance = None
    return wrapper_singleton
    from collections import MutableMapping


def env(*vars, **kwargs):
@@ -83,16 +70,6 @@ def rgb_from_name(name):
    return red, green, blue


def hextriplet(colortuple):
    """Convert a color tuple to a hex triplet."""
    return "#" + "".join(f"{i:02X}" for i in colortuple)


def hex_from_name(name):
    """Create a hex color from a string."""
    return hextriplet(rgb_from_name(name))


def human_size(bytes, units=None):
    """Returns a human readable string representation of bytes"""
    if not units:
@@ -159,6 +136,7 @@ def parse_delta_str(s):

def load_entry_points(group):
    """Load all extensions registered to the given entry point group"""
    print(f"Loading extensions for group {group}")
    try:
        import importlib_metadata
    except ImportError:
@@ -172,47 +150,3 @@ def load_entry_points(group):
    except Exception as e:
        print(f"Extension {ep.name} of group {group} failed to load with {e}", file=sys.stderr)
        print(traceback.format_exc(), file=sys.stderr)


def calculate_initial_compass_bearing(start, end):
    if (type(start) != tuple) or (type(end) != tuple): # noqa: E721
        raise TypeError("Only tuples are supported as arguments")

    lat1 = math.radians(float(start[0]))
    lat2 = math.radians(float(end[0]))

    diff_long = math.radians(float(end[1]) - float(start[1]))

    x = math.sin(diff_long) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - (
        math.sin(lat1)
        * math.cos(lat2) * math.cos(diff_long)
    )

    initial_bearing = math.atan2(x, y)

    # Now we have the initial bearing but math.atan2 return values
    # from -180° to + 180° which is not what we want for a compass bearing
    # The solution is to normalize the initial bearing as shown below
    initial_bearing = math.degrees(initial_bearing)
    compass_bearing = (initial_bearing + 360) % 360

    return compass_bearing


def degrees_to_cardinal(bearing, full_string=False):
    if full_string:
        directions = [
            "North", "North-Northeast", "Northeast", "East-Northeast", "East", "East-Southeast",
            "Southeast", "South-Southeast", "South", "South-Southwest", "Southwest", "West-Southwest",
            "West", "West-Northwest", "Northwest", "North-Northwest", "North",
        ]
    else:
        directions = [
            "N", "NNE", "NE", "ENE", "E", "ESE",
            "SE", "SSE", "S", "SSW", "SW", "WSW",
            "W", "WNW", "NW", "NNW", "N",
        ]

    cardinal = directions[round(bearing / 22.5)]
    return cardinal
@@ -1,13 +1,9 @@
from multiprocessing import RawValue
import random
import threading

import wrapt


MAX_PACKET_ID = 9999


class PacketCounter:
    """
    Global Packet id counter class.
@@ -21,18 +17,19 @@ class PacketCounter:
    """

    _instance = None
    max_count = 9999
    lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        """Make this a singleton class."""
        if cls._instance is None:
            cls._instance = super().__new__(cls, *args, **kwargs)
            cls._instance.val = RawValue("i", random.randint(1, MAX_PACKET_ID))
            cls._instance.val = RawValue("i", 1)
        return cls._instance

    @wrapt.synchronized(lock)
    def increment(self):
        if self.val.value == MAX_PACKET_ID:
        if self.val.value == self.max_count:
            self.val.value = 1
        else:
            self.val.value += 1
@@ -3,8 +3,6 @@ import decimal
import json
import sys

from aprsd.packets import core


class EnhancedJSONEncoder(json.JSONEncoder):
    def default(self, obj):
@@ -44,24 +42,6 @@ class EnhancedJSONEncoder(json.JSONEncoder):
            return super().default(obj)


class SimpleJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            return obj.isoformat()
        elif isinstance(obj, datetime.date):
            return str(obj)
        elif isinstance(obj, datetime.time):
            return str(obj)
        elif isinstance(obj, datetime.timedelta):
            return str(obj)
        elif isinstance(obj, decimal.Decimal):
            return str(obj)
        elif isinstance(obj, core.Packet):
            return obj.to_dict()
        else:
            return super().default(obj)


class EnhancedJSONDecoder(json.JSONDecoder):

    def __init__(self, *args, **kwargs):
@@ -2,7 +2,6 @@ import logging
import os
import pathlib
import pickle
import threading

from oslo_config import cfg

@@ -26,28 +25,19 @@ class ObjectStoreMixin:
    aprsd server -f (flush) will wipe all saved objects.
    """

    def __init__(self):
        self.lock = threading.RLock()

    def __len__(self):
        with self.lock:
            return len(self.data)
        return len(self.data)

    def __iter__(self):
        with self.lock:
            return iter(self.data)
        return iter(self.data)

    def get_all(self):
        with self.lock:
            return self.data

    def get(self, key):
    def get(self, id):
        with self.lock:
            return self.data.get(key)

    def copy(self):
        with self.lock:
            return self.data.copy()
            return self.data[id]

    def _init_store(self):
        if not CONF.enable_save:
@@ -68,26 +58,31 @@ class ObjectStoreMixin:
            self.__class__.__name__.lower(),
        )

    def _dump(self):
        dump = {}
        with self.lock:
            for key in self.data.keys():
                dump[key] = self.data[key]

        return dump

    def save(self):
        """Save any queued to disk?"""
        if not CONF.enable_save:
            return
        self._init_store()
        save_filename = self._save_filename()
        if len(self) > 0:
            LOG.info(
                f"{self.__class__.__name__}::Saving"
                f" {len(self)} entries to disk at "
                f"{save_filename}",
                f" {len(self)} entries to disk at"
                f"{CONF.save_location}",
            )
            with self.lock:
                with open(save_filename, "wb+") as fp:
                    pickle.dump(self.data, fp)
            with open(self._save_filename(), "wb+") as fp:
                pickle.dump(self._dump(), fp)
        else:
            LOG.debug(
                "{} Nothing to save, flushing old save file '{}'".format(
                    self.__class__.__name__,
                    save_filename,
                    self._save_filename(),
                ),
            )
            self.flush()
@@ -1,4 +1,189 @@
/* PrismJS 1.29.0
https://prismjs.com/download.html#themes=prism-tomorrow&languages=markup+css+clike+javascript+json+json5+log&plugins=show-language+toolbar */
code[class*=language-],pre[class*=language-]{color:#ccc;background:0 0;font-family:Consolas,Monaco,'Andale Mono','Ubuntu Mono',monospace;font-size:1em;text-align:left;white-space:pre;word-spacing:normal;word-break:normal;word-wrap:normal;line-height:1.5;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-hyphens:none;-moz-hyphens:none;-ms-hyphens:none;hyphens:none}pre[class*=language-]{padding:1em;margin:.5em 0;overflow:auto}:not(pre)>code[class*=language-],pre[class*=language-]{background:#2d2d2d}:not(pre)>code[class*=language-]{padding:.1em;border-radius:.3em;white-space:normal}.token.block-comment,.token.cdata,.token.comment,.token.doctype,.token.prolog{color:#999}.token.punctuation{color:#ccc}.token.attr-name,.token.deleted,.token.namespace,.token.tag{color:#e2777a}.token.function-name{color:#6196cc}.token.boolean,.token.function,.token.number{color:#f08d49}.token.class-name,.token.constant,.token.property,.token.symbol{color:#f8c555}.token.atrule,.token.builtin,.token.important,.token.keyword,.token.selector{color:#cc99cd}.token.attr-value,.token.char,.token.regex,.token.string,.token.variable{color:#7ec699}.token.entity,.token.operator,.token.url{color:#67cdcc}.token.bold,.token.important{font-weight:700}.token.italic{font-style:italic}.token.entity{cursor:help}.token.inserted{color:green}
div.code-toolbar{position:relative}div.code-toolbar>.toolbar{position:absolute;z-index:10;top:.3em;right:.2em;transition:opacity .3s ease-in-out;opacity:0}div.code-toolbar:hover>.toolbar{opacity:1}div.code-toolbar:focus-within>.toolbar{opacity:1}div.code-toolbar>.toolbar>.toolbar-item{display:inline-block}div.code-toolbar>.toolbar>.toolbar-item>a{cursor:pointer}div.code-toolbar>.toolbar>.toolbar-item>button{background:0 0;border:0;color:inherit;font:inherit;line-height:normal;overflow:visible;padding:0;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none}div.code-toolbar>.toolbar>.toolbar-item>a,div.code-toolbar>.toolbar>.toolbar-item>button,div.code-toolbar>.toolbar>.toolbar-item>span{color:#bbb;font-size:.8em;padding:0 .5em;background:#f5f2f0;background:rgba(224,224,224,.2);box-shadow:0 2px 0 0 rgba(0,0,0,.2);border-radius:.5em}div.code-toolbar>.toolbar>.toolbar-item>a:focus,div.code-toolbar>.toolbar>.toolbar-item>a:hover,div.code-toolbar>.toolbar>.toolbar-item>button:focus,div.code-toolbar>.toolbar>.toolbar-item>button:hover,div.code-toolbar>.toolbar>.toolbar-item>span:focus,div.code-toolbar>.toolbar>.toolbar-item>span:hover{color:inherit;text-decoration:none}
/* PrismJS 1.24.1
https://prismjs.com/download.html#themes=prism-tomorrow&languages=markup+css+clike+javascript+log&plugins=show-language+toolbar */
/**
 * prism.js tomorrow night eighties for JavaScript, CoffeeScript, CSS and HTML
 * Based on https://github.com/chriskempson/tomorrow-theme
 * @author Rose Pritchard
 */

code[class*="language-"],
pre[class*="language-"] {
	color: #ccc;
	background: none;
	font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace;
	font-size: 1em;
	text-align: left;
	white-space: pre;
	word-spacing: normal;
	word-break: normal;
	word-wrap: normal;
	line-height: 1.5;

	-moz-tab-size: 4;
	-o-tab-size: 4;
	tab-size: 4;

	-webkit-hyphens: none;
	-moz-hyphens: none;
	-ms-hyphens: none;
	hyphens: none;

}

/* Code blocks */
pre[class*="language-"] {
	padding: 1em;
	margin: .5em 0;
	overflow: auto;
}

:not(pre) > code[class*="language-"],
pre[class*="language-"] {
	background: #2d2d2d;
}

/* Inline code */
:not(pre) > code[class*="language-"] {
	padding: .1em;
	border-radius: .3em;
	white-space: normal;
}

.token.comment,
.token.block-comment,
.token.prolog,
.token.doctype,
.token.cdata {
	color: #999;
}

.token.punctuation {
	color: #ccc;
}

.token.tag,
.token.attr-name,
.token.namespace,
.token.deleted {
	color: #e2777a;
}

.token.function-name {
	color: #6196cc;
}

.token.boolean,
.token.number,
.token.function {
	color: #f08d49;
}

.token.property,
.token.class-name,
.token.constant,
.token.symbol {
	color: #f8c555;
}

.token.selector,
.token.important,
.token.atrule,
.token.keyword,
.token.builtin {
	color: #cc99cd;
}

.token.string,
.token.char,
.token.attr-value,
.token.regex,
.token.variable {
	color: #7ec699;
}

.token.operator,
.token.entity,
.token.url {
	color: #67cdcc;
}

.token.important,
.token.bold {
	font-weight: bold;
}
.token.italic {
	font-style: italic;
}

.token.entity {
	cursor: help;
}

.token.inserted {
	color: green;
}

div.code-toolbar {
	position: relative;
}

div.code-toolbar > .toolbar {
	position: absolute;
	top: .3em;
	right: .2em;
	transition: opacity 0.3s ease-in-out;
	opacity: 0;
}

div.code-toolbar:hover > .toolbar {
	opacity: 1;
}

/* Separate line b/c rules are thrown out if selector is invalid.
   IE11 and old Edge versions don't support :focus-within. */
div.code-toolbar:focus-within > .toolbar {
	opacity: 1;
}

div.code-toolbar > .toolbar > .toolbar-item {
	display: inline-block;
}

div.code-toolbar > .toolbar > .toolbar-item > a {
	cursor: pointer;
}

div.code-toolbar > .toolbar > .toolbar-item > button {
	background: none;
	border: 0;
	color: inherit;
	font: inherit;
	line-height: normal;
	overflow: visible;
	padding: 0;
	-webkit-user-select: none; /* for button */
	-moz-user-select: none;
	-ms-user-select: none;
}

div.code-toolbar > .toolbar > .toolbar-item > a,
div.code-toolbar > .toolbar > .toolbar-item > button,
div.code-toolbar > .toolbar > .toolbar-item > span {
	color: #bbb;
	font-size: .8em;
	padding: 0 .5em;
	background: #f5f2f0;
	background: rgba(224, 224, 224, 0.2);
	box-shadow: 0 2px 0 0 rgba(0,0,0,0.2);
	border-radius: .5em;
}

div.code-toolbar > .toolbar > .toolbar-item > a:hover,
div.code-toolbar > .toolbar > .toolbar-item > a:focus,
div.code-toolbar > .toolbar > .toolbar-item > button:hover,
div.code-toolbar > .toolbar > .toolbar-item > button:focus,
div.code-toolbar > .toolbar > .toolbar-item > span:hover,
div.code-toolbar > .toolbar > .toolbar-item > span:focus {
	color: inherit;
	text-decoration: none;
}
@@ -219,17 +219,15 @@ function updateQuadData(chart, label, first, second, third, fourth) {
}

function update_stats( data ) {
  our_callsign = data["APRSDStats"]["callsign"];
  $("#version").text( data["APRSDStats"]["version"] );
  our_callsign = data["stats"]["aprsd"]["callsign"];
  $("#version").text( data["stats"]["aprsd"]["version"] );
  $("#aprs_connection").html( data["aprs_connection"] );
  $("#uptime").text( "uptime: " + data["APRSDStats"]["uptime"] );
  $("#uptime").text( "uptime: " + data["stats"]["aprsd"]["uptime"] );
  const html_pretty = Prism.highlight(JSON.stringify(data, null, '\t'), Prism.languages.json, 'json');
  $("#jsonstats").html(html_pretty);
  short_time = data["time"].split(/\s(.+)/)[1];
  packet_list = data["PacketList"]["packets"];
  updateDualData(packets_chart, short_time, data["PacketList"]["sent"], data["PacketList"]["received"]);
  updateQuadData(message_chart, short_time, packet_list["MessagePacket"]["tx"], packet_list["MessagePacket"]["rx"],
                 packet_list["AckPacket"]["tx"], packet_list["AckPacket"]["rx"]);
  updateDualData(email_chart, short_time, data["EmailStats"]["sent"], data["EmailStats"]["recieved"]);
  updateDualData(memory_chart, short_time, data["APRSDStats"]["memory_peak"], data["APRSDStats"]["memory_current"]);
  updateDualData(packets_chart, short_time, data["stats"]["packets"]["sent"], data["stats"]["packets"]["received"]);
  updateQuadData(message_chart, short_time, data["stats"]["messages"]["sent"], data["stats"]["messages"]["received"], data["stats"]["messages"]["ack_sent"], data["stats"]["messages"]["ack_recieved"]);
  updateDualData(email_chart, short_time, data["stats"]["email"]["sent"], data["stats"]["email"]["recieved"]);
  updateDualData(memory_chart, short_time, data["stats"]["aprsd"]["memory_peak"], data["stats"]["aprsd"]["memory_current"]);
}
@@ -8,8 +8,6 @@ var packet_types_data = {};
var mem_current = []
var mem_peak = []

var thread_current = []


function start_charts() {
  console.log("start_charts() called");
@@ -19,7 +17,6 @@ function start_charts() {
  create_messages_chart();
  create_ack_chart();
  create_memory_chart();
  create_thread_chart();
}


@@ -261,49 +258,6 @@ function create_memory_chart() {
  memory_chart.setOption(option);
}

function create_thread_chart() {
  thread_canvas = document.getElementById('threadChart');
  thread_chart = echarts.init(thread_canvas);

  // Specify the configuration items and data for the chart
  var option = {
    title: {
      text: 'Active Threads'
    },
    legend: {},
    tooltip: {
      trigger: 'axis'
    },
    toolbox: {
      show: true,
      feature: {
        mark : {show: true},
        dataView : {show: true, readOnly: false},
        magicType : {show: true, type: ['line', 'bar']},
        restore : {show: true},
        saveAsImage : {show: true}
      }
    },
    calculable: true,
    xAxis: { type: 'time' },
    yAxis: { },
    series: [
      {
        name: 'current',
        type: 'line',
        smooth: true,
        color: 'red',
        encode: {
          x: 'timestamp',
          y: 'current' // refer sensor 1 value
        }
      }
    ]
  };

  thread_chart.setOption(option);
}




@@ -373,6 +327,7 @@ function updatePacketTypesChart() {
  option = {
    series: series
  }
  console.log(option)
  packet_types_chart.setOption(option);
}

@@ -417,21 +372,6 @@ function updateMemChart(time, current, peak) {
  memory_chart.setOption(option);
}

function updateThreadChart(time, threads) {
  keys = Object.keys(threads);
  thread_count = keys.length;
  thread_current.push([time, thread_count]);
  option = {
    series: [
      {
        name: 'current',
        data: thread_current,
      }
    ]
  }
  thread_chart.setOption(option);
}

function updateMessagesChart() {
  updateTypeChart(message_chart, "MessagePacket")
}
@@ -441,24 +381,22 @@ function updateAcksChart() {
}

function update_stats( data ) {
  console.log("update_stats() echarts.js called")
  stats = data["stats"];
  our_callsign = stats["APRSDStats"]["callsign"];
  $("#version").text( stats["APRSDStats"]["version"] );
  $("#aprs_connection").html( stats["aprs_connection"] );
  $("#uptime").text( "uptime: " + stats["APRSDStats"]["uptime"] );
  console.log(data);
  our_callsign = data["stats"]["aprsd"]["callsign"];
  $("#version").text( data["stats"]["aprsd"]["version"] );
  $("#aprs_connection").html( data["aprs_connection"] );
  $("#uptime").text( "uptime: " + data["stats"]["aprsd"]["uptime"] );
  const html_pretty = Prism.highlight(JSON.stringify(data, null, '\t'), Prism.languages.json, 'json');
  $("#jsonstats").html(html_pretty);

  t = Date.parse(data["time"]);
  ts = new Date(t);
  updatePacketData(packets_chart, ts, stats["PacketList"]["tx"], stats["PacketList"]["rx"]);
  updatePacketTypesData(ts, stats["PacketList"]["types"]);
  updatePacketData(packets_chart, ts, data["stats"]["packets"]["sent"], data["stats"]["packets"]["received"]);
  updatePacketTypesData(ts, data["stats"]["packets"]["types"]);
  updatePacketTypesChart();
  updateMessagesChart();
  updateAcksChart();
  updateMemChart(ts, stats["APRSDStats"]["memory_current"], stats["APRSDStats"]["memory_peak"]);
  updateThreadChart(ts, stats["APRSDThreadList"]);
  updateMemChart(ts, data["stats"]["aprsd"]["memory_current"], data["stats"]["aprsd"]["memory_peak"]);
  //updateQuadData(message_chart, short_time, data["stats"]["messages"]["sent"], data["stats"]["messages"]["received"], data["stats"]["messages"]["ack_sent"], data["stats"]["messages"]["ack_recieved"]);
  //updateDualData(email_chart, short_time, data["stats"]["email"]["sent"], data["stats"]["email"]["recieved"]);
  //updateDualData(memory_chart, short_time, data["stats"]["aprsd"]["memory_peak"], data["stats"]["aprsd"]["memory_current"]);
@@ -24,15 +24,11 @@ function ord(str){return str.charCodeAt(0);}


function update_watchlist( data ) {
  // Update the watch list
  stats = data["stats"];
  if (stats.hasOwnProperty("WatchList") == false) {
    return
  }
  // Update the watch list
  var watchdiv = $("#watchDiv");
  var html_str = '<table class="ui celled striped table"><thead><tr><th>HAM Callsign</th><th>Age since last seen by APRSD</th></tr></thead><tbody>'
  watchdiv.html('')
  jQuery.each(stats["WatchList"], function(i, val) {
  jQuery.each(data["stats"]["aprsd"]["watch_list"], function(i, val) {
    html_str += '<tr><td class="collapsing"><img id="callsign_'+i+'" class="aprsd_1"></img>' + i + '</td><td>' + val["last"] + '</td></tr>'
  });
  html_str += "</tbody></table>";
@@ -64,16 +60,12 @@ function update_watchlist_from_packet(callsign, val) {
}

function update_seenlist( data ) {
  stats = data["stats"];
  if (stats.hasOwnProperty("SeenList") == false) {
    return
  }
  var seendiv = $("#seenDiv");
  var html_str = '<table class="ui celled striped table">'
  html_str += '<thead><tr><th>HAM Callsign</th><th>Age since last seen by APRSD</th>'
  html_str += '<th>Number of packets RX</th></tr></thead><tbody>'
  seendiv.html('')
  var seen_list = stats["SeenList"]
  var seen_list = data["stats"]["aprsd"]["seen_list"]
  var len = Object.keys(seen_list).length
  $('#seen_count').html(len)
  jQuery.each(seen_list, function(i, val) {
@@ -87,10 +79,6 @@ function update_seenlist( data ) {
}

function update_plugins( data ) {
  stats = data["stats"];
  if (stats.hasOwnProperty("PluginManager") == false) {
    return
  }
  var plugindiv = $("#pluginDiv");
  var html_str = '<table class="ui celled striped table"><thead><tr>'
  html_str += '<th>Plugin Name</th><th>Plugin Enabled?</th>'
@@ -99,7 +87,7 @@ function update_plugins( data ) {
  html_str += '</tr></thead><tbody>'
  plugindiv.html('')

  var plugins = stats["PluginManager"];
  var plugins = data["stats"]["plugins"];
  var keys = Object.keys(plugins);
  keys.sort();
  for (var i=0; i<keys.length; i++) { // now lets iterate in sort order
@@ -113,42 +101,14 @@ function update_plugins( data ) {
  plugindiv.append(html_str);
}

function update_threads( data ) {
  stats = data["stats"];
  if (stats.hasOwnProperty("APRSDThreadList") == false) {
    return
  }
  var threadsdiv = $("#threadsDiv");
  var countdiv = $("#thread_count");
  var html_str = '<table class="ui celled striped table"><thead><tr>'
  html_str += '<th>Thread Name</th><th>Alive?</th>'
  html_str += '<th>Age</th><th>Loop Count</th>'
  html_str += '</tr></thead><tbody>'
  threadsdiv.html('')

  var threads = stats["APRSDThreadList"];
  var keys = Object.keys(threads);
  countdiv.html(keys.length);
  keys.sort();
  for (var i=0; i<keys.length; i++) { // now lets iterate in sort order
    var key = keys[i];
    var val = threads[key];
    html_str += '<tr><td class="collapsing">' + key + '</td>';
    html_str += '<td>' + val["alive"] + '</td><td>' + val["age"] + '</td>';
    html_str += '<td>' + val["loop_count"] + '</td></tr>';
  }
  html_str += "</tbody></table>";
  threadsdiv.append(html_str);
}

function update_packets( data ) {
  var packetsdiv = $("#packetsDiv");
  //nuke the contents first, then add to it.
  if (size_dict(packet_list) == 0 && size_dict(data) > 0) {
    packetsdiv.html('')
  }
  jQuery.each(data.packets, function(i, val) {
    pkt = val;
  jQuery.each(data, function(i, val) {
    pkt = JSON.parse(val);

    update_watchlist_from_packet(pkt['from_call'], pkt);
    if ( packet_list.hasOwnProperty(pkt['timestamp']) == false ) {
@@ -207,7 +167,6 @@ function start_update() {
      update_watchlist(data);
      update_seenlist(data);
      update_plugins(data);
      update_threads(data);
    },
    complete: function() {
      setTimeout(statsworker, 10000);
File diff suppressed because one or more lines are too long

57	aprsd/web/admin/static/json-viewer/jquery.json-viewer.css	Normal file
@@ -0,0 +1,57 @@
/* Root element */
.json-document {
  padding: 1em 2em;
}

/* Syntax highlighting for JSON objects */
ul.json-dict, ol.json-array {
  list-style-type: none;
  margin: 0 0 0 1px;
  border-left: 1px dotted #ccc;
  padding-left: 2em;
}
.json-string {
  color: #0B7500;
}
.json-literal {
  color: #1A01CC;
  font-weight: bold;
}

/* Toggle button */
a.json-toggle {
  position: relative;
  color: inherit;
  text-decoration: none;
}
a.json-toggle:focus {
  outline: none;
}
a.json-toggle:before {
  font-size: 1.1em;
  color: #c0c0c0;
  content: "\25BC"; /* down arrow */
  position: absolute;
  display: inline-block;
  width: 1em;
  text-align: center;
  line-height: 1em;
  left: -1.2em;
}
a.json-toggle:hover:before {
  color: #aaa;
}
a.json-toggle.collapsed:before {
  /* Use rotated down arrow, prevents right arrow appearing smaller than down arrow in some browsers */
  transform: rotate(-90deg);
}

/* Collapsable placeholder links */
a.json-placeholder {
  color: #aaa;
  padding: 0 1em;
  text-decoration: none;
}
a.json-placeholder:hover {
  text-decoration: underline;
}
158	aprsd/web/admin/static/json-viewer/jquery.json-viewer.js	Normal file
@@ -0,0 +1,158 @@
/**
 * jQuery json-viewer
 * @author: Alexandre Bodelot <alexandre.bodelot@gmail.com>
 * @link: https://github.com/abodelot/jquery.json-viewer
 */
(function($) {

  /**
   * Check if arg is either an array with at least 1 element, or a dict with at least 1 key
   * @return boolean
   */
  function isCollapsable(arg) {
    return arg instanceof Object && Object.keys(arg).length > 0;
  }

  /**
   * Check if a string represents a valid url
   * @return boolean
   */
  function isUrl(string) {
    var urlRegexp = /^(https?:\/\/|ftps?:\/\/)?([a-z0-9%-]+\.){1,}([a-z0-9-]+)?(:(\d{1,5}))?(\/([a-z0-9\-._~:/?#[\]@!$&'()*+,;=%]+)?)?$/i;
    return urlRegexp.test(string);
  }

  /**
   * Transform a json object into html representation
   * @return string
   */
  function json2html(json, options) {
    var html = '';
    if (typeof json === 'string') {
      // Escape tags and quotes
      json = json
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/'/g, '&apos;')
        .replace(/"/g, '&quot;');

      if (options.withLinks && isUrl(json)) {
        html += '<a href="' + json + '" class="json-string" target="_blank">' + json + '</a>';
      } else {
        // Escape double quotes in the rendered non-URL string.
        json = json.replace(/&quot;/g, '\\&quot;');
        html += '<span class="json-string">"' + json + '"</span>';
      }
    } else if (typeof json === 'number') {
      html += '<span class="json-literal">' + json + '</span>';
    } else if (typeof json === 'boolean') {
      html += '<span class="json-literal">' + json + '</span>';
    } else if (json === null) {
      html += '<span class="json-literal">null</span>';
    } else if (json instanceof Array) {
      if (json.length > 0) {
        html += '[<ol class="json-array">';
        for (var i = 0; i < json.length; ++i) {
          html += '<li>';
          // Add toggle button if item is collapsable
          if (isCollapsable(json[i])) {
            html += '<a href class="json-toggle"></a>';
          }
          html += json2html(json[i], options);
          // Add comma if item is not last
          if (i < json.length - 1) {
            html += ',';
          }
          html += '</li>';
        }
        html += '</ol>]';
      } else {
        html += '[]';
      }
    } else if (typeof json === 'object') {
      var keyCount = Object.keys(json).length;
      if (keyCount > 0) {
        html += '{<ul class="json-dict">';
        for (var key in json) {
          if (Object.prototype.hasOwnProperty.call(json, key)) {
            html += '<li>';
            var keyRepr = options.withQuotes ?
              '<span class="json-string">"' + key + '"</span>' : key;
            // Add toggle button if item is collapsable
            if (isCollapsable(json[key])) {
              html += '<a href class="json-toggle">' + keyRepr + '</a>';
            } else {
              html += keyRepr;
            }
            html += ': ' + json2html(json[key], options);
            // Add comma if item is not last
            if (--keyCount > 0) {
              html += ',';
            }
            html += '</li>';
          }
        }
        html += '</ul>}';
      } else {
        html += '{}';
      }
    }
    return html;
  }

  /**
   * jQuery plugin method
   * @param json: a javascript object
   * @param options: an optional options hash
   */
  $.fn.jsonViewer = function(json, options) {
    // Merge user options with default options
    options = Object.assign({}, {
      collapsed: false,
      rootCollapsable: true,
      withQuotes: false,
      withLinks: true
    }, options);

    // jQuery chaining
    return this.each(function() {

      // Transform to HTML
      var html = json2html(json, options);
      if (options.rootCollapsable && isCollapsable(json)) {
        html = '<a href class="json-toggle"></a>' + html;
      }

      // Insert HTML in target DOM element
      $(this).html(html);
      $(this).addClass('json-document');

      // Bind click on toggle buttons
      $(this).off('click');
      $(this).on('click', 'a.json-toggle', function() {
        var target = $(this).toggleClass('collapsed').siblings('ul.json-dict, ol.json-array');
        target.toggle();
        if (target.is(':visible')) {
          target.siblings('.json-placeholder').remove();
        } else {
          var count = target.children('li').length;
          var placeholder = count + (count > 1 ? ' items' : ' item');
          target.after('<a href class="json-placeholder">' + placeholder + '</a>');
        }
        return false;
      });

      // Simulate click on toggle button when placeholder is clicked
      $(this).on('click', 'a.json-placeholder', function() {
        $(this).siblings('a.json-toggle').click();
        return false;
      });

      if (options.collapsed == true) {
        // Trigger click to collapse all nodes
        $(this).find('a.json-toggle').click();
      }
    });
  };
})(jQuery);
@@ -30,6 +30,7 @@
    var color = Chart.helpers.color;

    $(document).ready(function() {
        console.log(initial_stats);
        start_update();
        start_charts();
        init_messages();
@@ -81,7 +82,6 @@
      <div class="item" data-tab="seen-tab">Seen List</div>
      <div class="item" data-tab="watch-tab">Watch List</div>
      <div class="item" data-tab="plugin-tab">Plugins</div>
      <div class="item" data-tab="threads-tab">Threads</div>
      <div class="item" data-tab="config-tab">Config</div>
      <div class="item" data-tab="log-tab">LogFile</div>
      <!-- <div class="item" data-tab="oslo-tab">OSLO CONFIG</div> //-->
@@ -97,6 +97,11 @@
          <div class="ui segment" style="height: 300px" id="packetsChart"></div>
        </div>
      </div>
      <div class="row">
        <div class="column">
          <div class="ui segment" style="height: 300px" id="packetTypesChart"></div>
        </div>
      </div>
      <div class="row">
        <div class="column">
          <div class="ui segment" style="height: 300px" id="messagesChart"></div>
@@ -107,17 +112,8 @@
        </div>
      </div>
      <div class="row">
        <div class="column">
          <div class="ui segment" style="height: 300px" id="packetTypesChart"></div>
        </div>
      </div>
      <div class="row">
        <div class="column">
          <div class="ui segment" style="height: 300px" id="threadChart"></div>
        </div>
      </div>
      <div class="row">
        <div class="column">
          <div class="ui segment" style="height: 300px" id="memChart"></div>
          <div class="ui segment" style="height: 300px" id="memChart">
          </div>
        </div>
      </div>
      <!-- <div class="row">
@@ -160,13 +156,6 @@
        <div id="pluginDiv" class="ui mini text">Loading</div>
      </div>

      <div class="ui bottom attached tab segment" data-tab="threads-tab">
        <h3 class="ui dividing header">
          Threads Loaded (<span id="thread_count">{{ thread_count }}</span>)
        </h3>
        <div id="threadsDiv" class="ui mini text">Loading</div>
      </div>

      <div class="ui bottom attached tab segment" data-tab="config-tab">
        <h3 class="ui dividing header">Config</h3>
        <pre id="configjson" class="language-json">{{ config_json|safe }}</pre>
@@ -185,7 +174,7 @@

      <div class="ui bottom attached tab segment" data-tab="raw-tab">
        <h3 class="ui dividing header">Raw JSON</h3>
        <pre id="jsonstats" class="language-yaml" style="height:600px;overflow-y:auto;">{{ initial_stats|safe }}</pre>
        <pre id="jsonstats" class="language-yaml" style="height:600px;overflow-y:auto;">{{ stats|safe }}</pre>
      </div>

      <div class="ui text container">
@ -64,11 +64,9 @@ function showError(error) {
|
||||
|
||||
function showPosition(position) {
|
||||
console.log("showPosition Called");
|
||||
path = $('#pkt_path option:selected').val();
|
||||
msg = {
|
||||
'latitude': position.coords.latitude,
|
||||
'longitude': position.coords.longitude,
|
||||
'path': path,
|
||||
'longitude': position.coords.longitude
|
||||
}
|
||||
console.log(msg);
|
||||
$.toast({
|
||||
|
@ -19,10 +19,9 @@ function show_aprs_icon(item, symbol) {
|
||||
function ord(str){return str.charCodeAt(0);}
|
||||
|
||||
function update_stats( data ) {
|
||||
console.log(data);
|
||||
$("#version").text( data["stats"]["APRSDStats"]["version"] );
|
||||
$("#version").text( data["stats"]["aprsd"]["version"] );
|
||||
$("#aprs_connection").html( data["aprs_connection"] );
|
||||
$("#uptime").text( "uptime: " + data["stats"]["APRSDStats"]["uptime"] );
|
||||
$("#uptime").text( "uptime: " + data["stats"]["aprsd"]["uptime"] );
|
||||
short_time = data["time"].split(/\s(.+)/)[1];
|
||||
}
|
||||
|
||||
@ -38,7 +37,7 @@ function start_update() {
|
||||
update_stats(data);
|
||||
},
|
||||
complete: function() {
|
||||
setTimeout(statsworker, 60000);
|
||||
setTimeout(statsworker, 10000);
|
||||
}
|
||||
});
|
||||
})();
|
||||
|
@ -313,7 +313,6 @@ function create_callsign_tab(callsign, active=false) {
|
||||
//item_html += '<button onClick="callsign_select(\''+callsign+'\');" callsign="'+callsign+'" class="nav-link '+active_str+'" id="'+tab_id+'" data-bs-toggle="tab" data-bs-target="#'+tab_content+'" type="button" role="tab" aria-controls="'+callsign+'" aria-selected="true">';
|
||||
item_html += '<button onClick="callsign_select(\''+callsign+'\');" callsign="'+callsign+'" class="nav-link position-relative '+active_str+'" id="'+tab_id+'" data-bs-toggle="tab" data-bs-target="#'+tab_content+'" type="button" role="tab" aria-controls="'+callsign+'" aria-selected="true">';
|
||||
item_html += callsign+' ';
|
||||
item_html += '<span id="'+tab_notify_id+'" class="position-absolute top-0 start-80 translate-middle badge bg-danger border border-light rounded-pill visually-hidden">0</span>';
|
||||
item_html += '<span onclick="delete_tab(\''+callsign+'\');">×</span>';
|
||||
item_html += '</button></li>'
|
||||
|
||||
@ -408,15 +407,13 @@ function append_message(callsign, msg, msg_html) {
|
||||
tab_notify_id = tab_notification_id(callsign, true);
|
||||
// get the current count of notifications
|
||||
count = parseInt($(tab_notify_id).text());
|
||||
if (isNaN(count)) {
|
||||
count = 0;
|
||||
}
|
||||
count += 1;
|
||||
$(tab_notify_id).text(count);
|
||||
$(tab_notify_id).removeClass('visually-hidden');
|
||||
}
|
||||
|
||||
// Find the right div to place the html
|
||||
|
||||
new_callsign = add_callsign(callsign, msg);
|
||||
update_callsign_path(callsign, msg);
|
||||
append_message_html(callsign, msg_html, new_callsign);
|
||||
@ -505,7 +502,7 @@ function sent_msg(msg) {
|
||||
msg_html = create_message_html(d, t, msg['from_call'], msg['to_call'], msg['message_text'], ack_id, msg, false);
|
||||
append_message(msg['to_call'], msg, msg_html);
|
||||
save_data();
|
||||
scroll_main_content(msg['to_call']);
|
||||
scroll_main_content(msg['from_call']);
|
||||
}
|
||||
|
||||
function from_msg(msg) {
|
||||
|
57 aprsd/web/chat/static/json-viewer/jquery.json-viewer.css Normal file
@@ -0,0 +1,57 @@
/* Root element */
.json-document {
  padding: 1em 2em;
}

/* Syntax highlighting for JSON objects */
ul.json-dict, ol.json-array {
  list-style-type: none;
  margin: 0 0 0 1px;
  border-left: 1px dotted #ccc;
  padding-left: 2em;
}
.json-string {
  color: #0B7500;
}
.json-literal {
  color: #1A01CC;
  font-weight: bold;
}

/* Toggle button */
a.json-toggle {
  position: relative;
  color: inherit;
  text-decoration: none;
}
a.json-toggle:focus {
  outline: none;
}
a.json-toggle:before {
  font-size: 1.1em;
  color: #c0c0c0;
  content: "\25BC"; /* down arrow */
  position: absolute;
  display: inline-block;
  width: 1em;
  text-align: center;
  line-height: 1em;
  left: -1.2em;
}
a.json-toggle:hover:before {
  color: #aaa;
}
a.json-toggle.collapsed:before {
  /* Use rotated down arrow, prevents right arrow appearing smaller than down arrow in some browsers */
  transform: rotate(-90deg);
}

/* Collapsable placeholder links */
a.json-placeholder {
  color: #aaa;
  padding: 0 1em;
  text-decoration: none;
}
a.json-placeholder:hover {
  text-decoration: underline;
}

158 aprsd/web/chat/static/json-viewer/jquery.json-viewer.js Normal file
@@ -0,0 +1,158 @@
/**
 * jQuery json-viewer
 * @author: Alexandre Bodelot <alexandre.bodelot@gmail.com>
 * @link: https://github.com/abodelot/jquery.json-viewer
 */
(function($) {

  /**
   * Check if arg is either an array with at least 1 element, or a dict with at least 1 key
   * @return boolean
   */
  function isCollapsable(arg) {
    return arg instanceof Object && Object.keys(arg).length > 0;
  }

  /**
   * Check if a string represents a valid url
   * @return boolean
   */
  function isUrl(string) {
    var urlRegexp = /^(https?:\/\/|ftps?:\/\/)?([a-z0-9%-]+\.){1,}([a-z0-9-]+)?(:(\d{1,5}))?(\/([a-z0-9\-._~:/?#[\]@!$&'()*+,;=%]+)?)?$/i;
    return urlRegexp.test(string);
  }

  /**
   * Transform a json object into html representation
   * @return string
   */
  function json2html(json, options) {
    var html = '';
    if (typeof json === 'string') {
      // Escape tags and quotes
      json = json
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/'/g, '&apos;')
        .replace(/"/g, '&quot;');

      if (options.withLinks && isUrl(json)) {
        html += '<a href="' + json + '" class="json-string" target="_blank">' + json + '</a>';
      } else {
        // Escape double quotes in the rendered non-URL string.
        json = json.replace(/&quot;/g, '\\&quot;');
        html += '<span class="json-string">"' + json + '"</span>';
      }
    } else if (typeof json === 'number') {
      html += '<span class="json-literal">' + json + '</span>';
    } else if (typeof json === 'boolean') {
      html += '<span class="json-literal">' + json + '</span>';
    } else if (json === null) {
      html += '<span class="json-literal">null</span>';
    } else if (json instanceof Array) {
      if (json.length > 0) {
        html += '[<ol class="json-array">';
        for (var i = 0; i < json.length; ++i) {
          html += '<li>';
          // Add toggle button if item is collapsable
          if (isCollapsable(json[i])) {
            html += '<a href class="json-toggle"></a>';
          }
          html += json2html(json[i], options);
          // Add comma if item is not last
          if (i < json.length - 1) {
            html += ',';
          }
          html += '</li>';
        }
        html += '</ol>]';
      } else {
        html += '[]';
      }
    } else if (typeof json === 'object') {
      var keyCount = Object.keys(json).length;
      if (keyCount > 0) {
        html += '{<ul class="json-dict">';
        for (var key in json) {
          if (Object.prototype.hasOwnProperty.call(json, key)) {
            html += '<li>';
            var keyRepr = options.withQuotes ?
              '<span class="json-string">"' + key + '"</span>' : key;
            // Add toggle button if item is collapsable
            if (isCollapsable(json[key])) {
              html += '<a href class="json-toggle">' + keyRepr + '</a>';
            } else {
              html += keyRepr;
            }
            html += ': ' + json2html(json[key], options);
            // Add comma if item is not last
            if (--keyCount > 0) {
              html += ',';
            }
            html += '</li>';
          }
        }
        html += '</ul>}';
      } else {
        html += '{}';
      }
    }
    return html;
  }

  /**
   * jQuery plugin method
   * @param json: a javascript object
   * @param options: an optional options hash
   */
  $.fn.jsonViewer = function(json, options) {
    // Merge user options with default options
    options = Object.assign({}, {
      collapsed: false,
      rootCollapsable: true,
      withQuotes: false,
      withLinks: true
    }, options);

    // jQuery chaining
    return this.each(function() {

      // Transform to HTML
      var html = json2html(json, options);
      if (options.rootCollapsable && isCollapsable(json)) {
        html = '<a href class="json-toggle"></a>' + html;
      }

      // Insert HTML in target DOM element
      $(this).html(html);
      $(this).addClass('json-document');

      // Bind click on toggle buttons
      $(this).off('click');
      $(this).on('click', 'a.json-toggle', function() {
        var target = $(this).toggleClass('collapsed').siblings('ul.json-dict, ol.json-array');
        target.toggle();
        if (target.is(':visible')) {
          target.siblings('.json-placeholder').remove();
        } else {
          var count = target.children('li').length;
          var placeholder = count + (count > 1 ? ' items' : ' item');
          target.after('<a href class="json-placeholder">' + placeholder + '</a>');
        }
        return false;
      });

      // Simulate click on toggle button when placeholder is clicked
      $(this).on('click', 'a.json-placeholder', function() {
        $(this).siblings('a.json-toggle').click();
        return false;
      });

      if (options.collapsed == true) {
        // Trigger click to collapse all nodes
        $(this).find('a.json-toggle').click();
      }
    });
  };
})(jQuery);
@@ -103,7 +103,6 @@
<option value="WIDE1-1">WIDE1-1</option>
<option value="WIDE1-1,WIDE2-1">WIDE1-1,WIDE2-1</option>
<option value="ARISS">ARISS</option>
<option value="GATE">GATE</option>
</select>
</div>
<div class="col-sm-3">
200 aprsd/wsgi.py
@@ -3,11 +3,10 @@ import importlib.metadata as imp
import io
import json
import logging
import os
import queue
import time

import flask
from flask import Flask, request
from flask import Flask
from flask_httpauth import HTTPBasicAuth
from oslo_config import cfg, generator
import socketio

@@ -16,22 +15,14 @@ from werkzeug.security import check_password_hash
import aprsd
from aprsd import cli_helper, client, conf, packets, plugin, threads
from aprsd.log import log
from aprsd.threads import stats as stats_threads
from aprsd.utils import json as aprsd_json
from aprsd.rpc import client as aprsd_rpc_client


CONF = cfg.CONF
LOG = logging.getLogger("gunicorn.access")
logging_queue = queue.Queue()


# ADMIN_COMMAND True means we are running from `aprsd admin`
# the `aprsd admin` command will import this file after setting
# the APRSD_ADMIN_COMMAND environment variable.
ADMIN_COMMAND = os.environ.get("APRSD_ADMIN_COMMAND", False)

auth = HTTPBasicAuth()
users: dict[str, str] = {}
users = {}
app = Flask(
    "aprsd",
    static_url_path="/static",

@@ -54,40 +45,114 @@ def verify_password(username, password):


def _stats():
    stats_obj = stats_threads.StatsStore()
    stats_obj.load()
    track = aprsd_rpc_client.RPCClient().get_packet_track()
    now = datetime.datetime.now()

    time_format = "%m-%d-%Y %H:%M:%S"
    stats = {
        "time": now.strftime(time_format),
        "stats": stats_obj.data,

    stats_dict = aprsd_rpc_client.RPCClient().get_stats_dict()
    if not stats_dict:
        stats_dict = {
            "aprsd": {},
            "aprs-is": {"server": ""},
            "messages": {
                "sent": 0,
                "received": 0,
            },
            "email": {
                "sent": 0,
                "received": 0,
            },
            "seen_list": {
                "sent": 0,
                "received": 0,
            },
        }

    # Convert the watch_list entries to age
    wl = aprsd_rpc_client.RPCClient().get_watch_list()
    new_list = {}
    if wl:
        for call in wl.get_all():
            # call_date = datetime.datetime.strptime(
            #     str(wl.last_seen(call)),
            #     "%Y-%m-%d %H:%M:%S.%f",
            # )

            # We have to convert the RingBuffer to a real list
            # so that json.dumps works.
            # pkts = []
            # for pkt in wl.get(call)["packets"].get():
            #     pkts.append(pkt)

            new_list[call] = {
                "last": wl.age(call),
                # "packets": pkts
            }

    stats_dict["aprsd"]["watch_list"] = new_list
    packet_list = aprsd_rpc_client.RPCClient().get_packet_list()
    rx = tx = 0
    types = {}
    if packet_list:
        rx = packet_list.total_rx()
        tx = packet_list.total_tx()
        types_copy = packet_list.types.copy()

        for key in types_copy:
            types[str(key)] = dict(types_copy[key])

    stats_dict["packets"] = {
        "sent": tx,
        "received": rx,
        "types": types,
    }
    return stats
    if track:
        size_tracker = len(track)
    else:
        size_tracker = 0

    result = {
        "time": now.strftime(time_format),
        "size_tracker": size_tracker,
        "stats": stats_dict,
    }

    return result


@app.route("/stats")
def stats():
    LOG.debug("/stats called")
    return json.dumps(_stats(), cls=aprsd_json.SimpleJSONEncoder)
    return json.dumps(_stats())

@app.route("/")
def index():
    stats = _stats()
    wl = aprsd_rpc_client.RPCClient().get_watch_list()
    if wl and wl.is_enabled():
        watch_count = len(wl)
        watch_age = wl.max_delta()
    else:
        watch_count = 0
        watch_age = 0

    sl = aprsd_rpc_client.RPCClient().get_seen_list()
    if sl:
        seen_count = len(sl)
    else:
        seen_count = 0

    pm = plugin.PluginManager()
    plugins = pm.get_plugins()
    plugin_count = len(plugins)
    client_stats = stats["stats"].get("APRSClientStats", {})

    if CONF.aprs_network.enabled:
        transport = "aprs-is"
        if client_stats:
            aprs_connection = client_stats.get("server_string", "")
        else:
            aprs_connection = "APRS-IS"
        aprs_connection = (
            "APRS-IS Server: <a href='http://status.aprs2.net' >"
            "{}</a>".format(aprs_connection)
            "{}</a>".format(stats["stats"]["aprs-is"]["server"])
        )
    else:
        # We might be connected to a KISS socket?

@@ -108,20 +173,13 @@ def index():
        )
    )

    if client_stats:
        stats["stats"]["APRSClientStats"]["transport"] = transport
        stats["stats"]["APRSClientStats"]["aprs_connection"] = aprs_connection
    stats["transport"] = transport
    stats["aprs_connection"] = aprs_connection
    entries = conf.conf_to_dict()

    thread_info = stats["stats"].get("APRSDThreadList", {})
    if thread_info:
        thread_count = len(thread_info)
    else:
        thread_count = "unknown"

    return flask.render_template(
        "index.html",
        initial_stats=json.dumps(stats, cls=aprsd_json.SimpleJSONEncoder),
        initial_stats=stats,
        aprs_connection=aprs_connection,
        callsign=CONF.callsign,
        version=aprsd.__version__,

@@ -129,8 +187,10 @@ def index():
            entries, indent=4,
            sort_keys=True, default=str,
        ),
        watch_count=watch_count,
        watch_age=watch_age,
        seen_count=seen_count,
        plugin_count=plugin_count,
        thread_count=thread_count,
        # oslo_out=generate_oslo()
    )

@@ -149,10 +209,19 @@ def messages():
@auth.login_required
@app.route("/packets")
def get_packets():
    stats = _stats()
    stats_dict = stats["stats"]
    packets = stats_dict.get("PacketList", {})
    return json.dumps(packets, cls=aprsd_json.SimpleJSONEncoder)
    LOG.debug("/packets called")
    packet_list = aprsd_rpc_client.RPCClient().get_packet_list()
    if packet_list:
        tmp_list = []
        pkts = packet_list.copy()
        for key in pkts:
            pkt = packet_list.get(key)
            if pkt:
                tmp_list.append(pkt.json)

        return json.dumps(tmp_list)
    else:
        return json.dumps([])

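The master-branch `/packets` handler above passes `cls=aprsd_json.SimpleJSONEncoder` to `json.dumps` because packet objects carry values (timestamps, packet instances) that the stock encoder rejects. A rough sketch of what such a custom encoder looks like; `PacketJSONEncoder` is a hypothetical stand-in, not aprsd's actual implementation:

```python
import datetime
import json


class PacketJSONEncoder(json.JSONEncoder):
    """Hypothetical stand-in for aprsd's SimpleJSONEncoder."""

    def default(self, obj):
        if isinstance(obj, datetime.datetime):
            # Reuse the "%m-%d-%Y %H:%M:%S" format _stats() uses for "time"
            return obj.strftime("%m-%d-%Y %H:%M:%S")
        if hasattr(obj, "__dict__"):
            # Fall back to the object's attribute dict for packet-like objects
            return vars(obj)
        return super().default(obj)
```

Subclassing `JSONEncoder.default` is the standard extension point: it is only called for values the base encoder cannot serialize, so plain dicts and lists pass through untouched.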
@auth.login_required
@@ -204,34 +273,23 @@ def save():
    return json.dumps({"messages": "saved"})


@app.route("/log_entries", methods=["POST"])
def log_entries():
    """The url that the server can call to update the logs."""
    entries = request.json
    LOG.info(f"Log entries called {len(entries)}")
    for entry in entries:
        logging_queue.put(entry)
    return json.dumps({"messages": "saved"})


class LogUpdateThread(threads.APRSDThread):

    def __init__(self, logging_queue=None):
    def __init__(self):
        super().__init__("LogUpdate")
        self.logging_queue = logging_queue

    def loop(self):
        if sio:
            try:
                log_entry = self.logging_queue.get(block=True, timeout=1)
                if log_entry:
            log_entries = aprsd_rpc_client.RPCClient().get_log_entries()

            if log_entries:
                LOG.info(f"Sending log entries! {len(log_entries)}")
                for entry in log_entries:
                    sio.emit(
                        "log_entry",
                        log_entry,
                        "log_entry", entry,
                        namespace="/logs",
                    )
            except queue.Empty:
                pass
        time.sleep(5)
        return True


@@ -239,17 +297,17 @@ class LoggingNamespace(socketio.Namespace):
    log_thread = None

    def on_connect(self, sid, environ):
        global sio, logging_queue
        LOG.info(f"LOG on_connect {sid}")
        global sio
        LOG.debug(f"LOG on_connect {sid}")
        sio.emit(
            "connected", {"data": "/logs Connected"},
            namespace="/logs",
        )
        self.log_thread = LogUpdateThread(logging_queue=logging_queue)
        self.log_thread = LogUpdateThread()
        self.log_thread.start()

    def on_disconnect(self, sid):
        LOG.info(f"LOG Disconnected {sid}")
        LOG.debug(f"LOG Disconnected {sid}")
        if self.log_thread:
            self.log_thread.stop()

@@ -274,8 +332,8 @@ if __name__ == "__main__":
    async_mode = "threading"
    sio = socketio.Server(logger=True, async_mode=async_mode)
    app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)
    log_level = init_app()
    log.setup_logging(log_level)
    log_level = init_app(log_level="DEBUG")
    log.setup_logging(app, log_level)
    sio.register_namespace(LoggingNamespace("/logs"))
    CONF.log_opt_values(LOG, logging.DEBUG)
    app.run(

@@ -294,17 +352,17 @@ if __name__ == "uwsgi_file_aprsd_wsgi":
    sio = socketio.Server(logger=True, async_mode=async_mode)
    app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)
    log_level = init_app(
        # log_level="DEBUG",
        log_level="DEBUG",
        config_file="/config/aprsd.conf",
        # Commented out for local development.
        # config_file=cli_helper.DEFAULT_CONFIG_FILE
    )
    log.setup_logging(log_level)
    log.setup_logging(app, log_level)
    sio.register_namespace(LoggingNamespace("/logs"))
    CONF.log_opt_values(LOG, logging.DEBUG)


if __name__ == "aprsd.wsgi" and not ADMIN_COMMAND:
if __name__ == "aprsd.wsgi":
    # set async_mode to 'threading', 'eventlet', 'gevent' or 'gevent_uwsgi' to
    # force a mode else, the best mode is selected automatically from what's
    # installed

@@ -313,10 +371,10 @@ if __name__ == "aprsd.wsgi" and not ADMIN_COMMAND:
    app.wsgi_app = socketio.WSGIApp(sio, app.wsgi_app)

    log_level = init_app(
        # log_level="DEBUG",
        log_level="DEBUG",
        config_file="/config/aprsd.conf",
        # config_file=cli_helper.DEFAULT_CONFIG_FILE,
    )
    log.setup_logging(log_level)
    log.setup_logging(app, log_level)
    sio.register_namespace(LoggingNamespace("/logs"))
    CONF.log_opt_values(LOG, logging.DEBUG)
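The master-branch `LogUpdateThread.loop()` above blocks on a `queue.Queue` with a one-second timeout instead of polling RPC, so the thread can notice a stop request between entries. The core drain-with-timeout pattern, sketched standalone with a plain `emit` callback standing in for `sio.emit` (a sketch, not the aprsd code itself):

```python
import queue


def drain_one(log_queue: queue.Queue, emit, timeout: float = 1.0) -> bool:
    """Pop a single entry and hand it to emit(); return False if the queue
    stayed empty for the whole timeout.

    Mirrors the shape of LogUpdateThread.loop(): a blocking get with a
    timeout, so the surrounding thread loop regains control regularly.
    """
    try:
        entry = log_queue.get(block=True, timeout=timeout)
    except queue.Empty:
        return False
    emit("log_entry", entry)
    return True
```

Because `queue.Queue` is thread-safe, the Flask request handler (`/log_entries`) can `put()` from one thread while this loop `get()`s from another without extra locking.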
@@ -1,23 +1,16 @@
build
check-manifest
flake8
gray
isort
mypy
pep8-naming
pytest
pytest-cov
pip
pip-tools
pre-commit
Sphinx
tox
wheel

# Twine is used for uploading packages to pypi
# but it induces an install of cryptography
# This is sucky for rpi systems.
# twine

# m2r is for converting .md files to .rst for the docs
m2r
pre-commit
pytest
pytest-cov
gray
pip
pip-tools
84 dev-requirements.txt Normal file
@@ -0,0 +1,84 @@
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
#    pip-compile --annotation-style=line dev-requirements.in
#
add-trailing-comma==3.1.0  # via gray
alabaster==0.7.16  # via sphinx
autoflake==1.5.3  # via gray
babel==2.14.0  # via sphinx
black==24.2.0  # via gray
build==1.1.1  # via pip-tools
cachetools==5.3.3  # via tox
certifi==2024.2.2  # via requests
cfgv==3.4.0  # via pre-commit
chardet==5.2.0  # via tox
charset-normalizer==3.3.2  # via requests
click==8.1.7  # via black, fixit, moreorless, pip-tools
colorama==0.4.6  # via tox
commonmark==0.9.1  # via rich
configargparse==1.7  # via gray
coverage[toml]==7.4.3  # via pytest-cov
distlib==0.3.8  # via virtualenv
docutils==0.20.1  # via sphinx
exceptiongroup==1.2.0  # via pytest
filelock==3.13.1  # via tox, virtualenv
fixit==2.1.0  # via gray
flake8==7.0.0  # via -r dev-requirements.in, pep8-naming
gray==0.14.0  # via -r dev-requirements.in
identify==2.5.35  # via pre-commit
idna==3.6  # via requests
imagesize==1.4.1  # via sphinx
iniconfig==2.0.0  # via pytest
isort==5.13.2  # via -r dev-requirements.in, gray
jinja2==3.1.3  # via sphinx
libcst==1.2.0  # via fixit
markupsafe==2.1.5  # via jinja2
mccabe==0.7.0  # via flake8
moreorless==0.4.0  # via fixit
mypy==1.8.0  # via -r dev-requirements.in
mypy-extensions==1.0.0  # via black, mypy, typing-inspect
nodeenv==1.8.0  # via pre-commit
packaging==23.2  # via black, build, fixit, pyproject-api, pytest, sphinx, tox
pathspec==0.12.1  # via black, trailrunner
pep8-naming==0.13.3  # via -r dev-requirements.in
pip-tools==7.4.1  # via -r dev-requirements.in
platformdirs==4.2.0  # via black, tox, virtualenv
pluggy==1.4.0  # via pytest, tox
pre-commit==3.6.2  # via -r dev-requirements.in
pycodestyle==2.11.1  # via flake8
pyflakes==3.2.0  # via autoflake, flake8
pygments==2.17.2  # via rich, sphinx
pyproject-api==1.6.1  # via tox
pyproject-hooks==1.0.0  # via build, pip-tools
pytest==8.0.2  # via -r dev-requirements.in, pytest-cov
pytest-cov==4.1.0  # via -r dev-requirements.in
pyupgrade==3.15.1  # via gray
pyyaml==6.0.1  # via libcst, pre-commit
requests==2.31.0  # via sphinx
rich==12.6.0  # via gray
snowballstemmer==2.2.0  # via sphinx
sphinx==7.2.6  # via -r dev-requirements.in
sphinxcontrib-applehelp==1.0.8  # via sphinx
sphinxcontrib-devhelp==1.0.6  # via sphinx
sphinxcontrib-htmlhelp==2.0.5  # via sphinx
sphinxcontrib-jsmath==1.0.1  # via sphinx
sphinxcontrib-qthelp==1.0.7  # via sphinx
sphinxcontrib-serializinghtml==1.1.10  # via sphinx
tokenize-rt==5.2.0  # via add-trailing-comma, pyupgrade
toml==0.10.2  # via autoflake
tomli==2.0.1  # via black, build, coverage, fixit, mypy, pip-tools, pyproject-api, pyproject-hooks, pytest, tox
tox==4.14.0  # via -r dev-requirements.in
trailrunner==1.4.0  # via fixit
typing-extensions==4.10.0  # via black, libcst, mypy, typing-inspect
typing-inspect==0.9.0  # via libcst
unify==0.5  # via gray
untokenize==0.1.1  # via unify
urllib3==2.2.1  # via requests
virtualenv==20.25.1  # via pre-commit, tox
wheel==0.42.0  # via pip-tools

# The following packages are considered to be unsafe in a requirements file:
# pip
# setuptools
@@ -1,18 +1,10 @@
FROM python:3.11-slim AS build
FROM python:3.11-slim as build

ARG VERSION=3.4.0
# pass this in as 'dev' if you want to install from github repo vs pypi
ARG INSTALL_TYPE=pypi

ARG BRANCH=master
ARG BUILDX_QEMU_ENV

ENV APRSD_BRANCH=${BRANCH:-master}
ARG VERSION=3.1.0
ENV TZ=${TZ:-US/Eastern}
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV APRSD_PIP_VERSION=${VERSION}
ENV PATH="${PATH}:/app/.local/bin"

ENV PIP_DEFAULT_TIMEOUT=100 \
    # Allow statements and log messages to immediately appear

@@ -27,7 +19,6 @@ RUN set -ex \
    # Create a non-root user
    && addgroup --system --gid 1001 appgroup \
    && useradd --uid 1001 --gid 1001 -s /usr/bin/bash -m -d /app appuser \
    && usermod -aG sudo appuser \
    # Upgrade the package index and install security upgrades
    && apt-get update \
    && apt-get upgrade -y \

@@ -40,38 +31,29 @@ RUN set -ex \


### Final stage
FROM build AS install
FROM build as final
WORKDIR /app

RUN pip3 install -U pip
RUN pip3 install aprsd==$APRSD_PIP_VERSION
RUN pip install gevent uwsgi
RUN which aprsd
RUN mkdir /config
RUN chown -R appuser:appgroup /app
RUN chown -R appuser:appgroup /config
USER appuser
RUN if [ "$INSTALL_TYPE" = "pypi" ]; then \
        pip3 install aprsd==$APRSD_PIP_VERSION; \
    elif [ "$INSTALL_TYPE" = "github" ]; then \
        git clone -b $APRSD_BRANCH https://github.com/craigerl/aprsd; \
        cd /app/aprsd && pip install .; \
        ls -al /app/.local/lib/python3.11/site-packages/aprsd*; \
    fi
RUN pip install gevent uwsgi
RUN echo "PATH=\$PATH:/usr/games:/app/.local/bin" >> /app/.bashrc
RUN echo "PATH=\$PATH:/usr/games" >> /app/.bashrc
RUN which aprsd
RUN aprsd sample-config > /config/aprsd.conf
RUN aprsd --version

ADD bin/setup.sh /app
ADD bin/run.sh /app
ADD bin/listen.sh /app
ADD bin/admin.sh /app


FROM install AS final
# For the web admin interface
EXPOSE 8001

ENTRYPOINT ["/app/run.sh"]
VOLUME ["/config"]
ENTRYPOINT ["/app/setup.sh"]
CMD ["server"]

# Set the user to run the application
USER appuser
58 docker/Dockerfile-dev Normal file
@@ -0,0 +1,58 @@
FROM python:3.11-slim as build

ARG BRANCH=master
ARG BUILDX_QEMU_ENV
ENV APRSD_BRANCH=${BRANCH:-master}

ENV PIP_DEFAULT_TIMEOUT=100 \
    # Allow statements and log messages to immediately appear
    PYTHONUNBUFFERED=1 \
    # disable a pip version check to reduce run-time & log-spam
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    # cache is useless in docker image, so disable to reduce image size
    PIP_NO_CACHE_DIR=1


RUN set -ex \
    # Create a non-root user
    && addgroup --system --gid 1001 appgroup \
    && useradd --uid 1001 --gid 1001 -s /usr/bin/bash -m -d /app appuser \
    # Upgrade the package index and install security upgrades
    && apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y git build-essential curl libffi-dev fortune \
        python3-dev libssl-dev libxml2-dev libxslt-dev telnet sudo \
    # Install dependencies
    # Clean up
    && apt-get autoremove -y \
    && apt-get clean -y


### Final stage
FROM build as final
WORKDIR /app

RUN git clone -b $APRSD_BRANCH https://github.com/craigerl/aprsd
RUN cd aprsd && pip install --no-cache-dir .
RUN pip install gevent uwsgi
RUN which aprsd
RUN mkdir /config
RUN chown -R appuser:appgroup /app
RUN chown -R appuser:appgroup /config
USER appuser
RUN echo "PATH=\$PATH:/usr/games" >> /app/.bashrc
RUN which aprsd
RUN aprsd sample-config > /config/aprsd.conf

ADD bin/run.sh /app
ADD bin/listen.sh /app
ADD bin/admin.sh /app

EXPOSE 8000

# CMD ["gunicorn", "aprsd.wsgi:app", "--host", "0.0.0.0", "--port", "8000"]
ENTRYPOINT ["/app/run.sh"]
VOLUME ["/config"]

# Set the user to run the application
USER appuser
@ -1,50 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -x
|
||||
|
||||
# The default command
|
||||
# Override the command in docker-compose.yml to change
|
||||
# what command you want to run in the container
|
||||
COMMAND="server"
|
||||
|
||||
if [ ! -z "${@+x}" ]; then
|
||||
COMMAND=$@
|
||||
fi
|
||||
|
||||
if [ ! -z "${APRSD_PLUGINS}" ]; then
|
||||
OLDIFS=$IFS
|
||||
IFS=','
|
||||
echo "Installing pypi plugins '$APRSD_PLUGINS'";
|
||||
for plugin in ${APRSD_PLUGINS}; do
|
||||
IFS=$OLDIFS
|
||||
# call your procedure/other scripts here below
|
||||
echo "Installing '$plugin'"
|
||||
pip3 install --user $plugin
|
||||
done
|
||||
fi
|
||||
|
||||
if [ ! -z "${APRSD_EXTENSIONS}" ]; then
|
||||
OLDIFS=$IFS
|
||||
IFS=','
|
||||
echo "Installing APRSD extensions from pypi '$APRSD_EXTENSIONS'";
|
||||
for extension in ${APRSD_EXTENSIONS}; do
|
||||
IFS=$OLDIFS
|
||||
# call your procedure/other scripts here below
|
||||
echo "Installing '$extension'"
|
||||
pip3 install --user $extension
|
||||
done
|
||||
fi
|
||||
|
||||
if [ -z "${LOG_LEVEL}" ] || [[ ! "${LOG_LEVEL}" =~ ^(CRITICAL|ERROR|WARNING|INFO)$ ]]; then
|
||||
LOG_LEVEL="DEBUG"
|
||||
fi
|
||||
|
||||
echo "Log level is set to ${LOG_LEVEL}";
|
||||
|
||||
# check to see if there is a config file
|
||||
APRSD_CONFIG="/config/aprsd.conf"
|
||||
if [ ! -e "$APRSD_CONFIG" ]; then
|
||||
echo "'$APRSD_CONFIG' File does not exist. Creating."
|
||||
aprsd sample-config > $APRSD_CONFIG
|
||||
fi
|
||||
|
||||
aprsd ${COMMAND} --config ${APRSD_CONFIG} --loglevel ${LOG_LEVEL}
|
@@ -26,7 +26,7 @@ DEV=0
 REBUILD_BUILDX=0
 TAG="latest"
 BRANCH=${BRANCH:-master}
-VERSION="3.3.4"
+VERSION="3.0.0"

 while getopts “hdart:b:v:” OPTION
 do
@@ -90,8 +90,7 @@ then
 # Use this script to locally build the docker image
 docker buildx build --push --platform $PLATFORMS \
     -t hemna6969/aprsd:$TAG \
-    --build-arg INSTALL_TYPE=github \
-    --build-arg branch=$BRANCH \
+    -f Dockerfile-dev --build-arg branch=$BRANCH \
     --build-arg BUILDX_QEMU_ENV=true \
     --no-cache .
 else
@@ -102,5 +101,6 @@ else
     --build-arg BUILDX_QEMU_ENV=true \
     -t hemna6969/aprsd:$VERSION \
     -t hemna6969/aprsd:$TAG \
-    -t hemna6969/aprsd:latest .
+    -t hemna6969/aprsd:latest \
+    -f Dockerfile .
 fi
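The release path of the build script applies three tags to one image (version tag, moving tag, and `latest`). A hedged sketch of that pattern; the `image_tags` helper is our own illustration, not part of the script:

```shell
#!/usr/bin/env bash
# Sketch: derive the -t arguments the release branch of the script
# passes to docker buildx (version tag, moving tag, and latest).
image_tags() {
    local REPO="$1" VERSION="$2" TAG="$3"
    printf -- '-t %s:%s -t %s:%s -t %s:latest\n' \
        "$REPO" "$VERSION" "$REPO" "$TAG" "$REPO"
}

image_tags hemna6969/aprsd "3.3.4" dev
# prints: -t hemna6969/aprsd:3.3.4 -t hemna6969/aprsd:dev -t hemna6969/aprsd:latest
```

Tagging the same build with a pinned version and a moving `latest` lets users choose between reproducibility and automatic updates.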
@@ -1,37 +0,0 @@
aprsd.client.drivers package
============================

Submodules
----------

aprsd.client.drivers.aprsis module
----------------------------------

.. automodule:: aprsd.client.drivers.aprsis
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.drivers.fake module
--------------------------------

.. automodule:: aprsd.client.drivers.fake
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.drivers.kiss module
--------------------------------

.. automodule:: aprsd.client.drivers.kiss
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: aprsd.client.drivers
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,69 +0,0 @@
aprsd.client package
====================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   aprsd.client.drivers

Submodules
----------

aprsd.client.aprsis module
--------------------------

.. automodule:: aprsd.client.aprsis
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.base module
------------------------

.. automodule:: aprsd.client.base
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.factory module
---------------------------

.. automodule:: aprsd.client.factory
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.fake module
------------------------

.. automodule:: aprsd.client.fake
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.kiss module
------------------------

.. automodule:: aprsd.client.kiss
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.client.stats module
-------------------------

.. automodule:: aprsd.client.stats
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: aprsd.client
   :members:
   :undoc-members:
   :show-inheritance:
29 docs/apidoc/aprsd.clients.rst Normal file
@@ -0,0 +1,29 @@
aprsd.clients package
=====================

Submodules
----------

aprsd.clients.aprsis module
---------------------------

.. automodule:: aprsd.clients.aprsis
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.clients.kiss module
-------------------------

.. automodule:: aprsd.clients.kiss
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: aprsd.clients
   :members:
   :undoc-members:
   :show-inheritance:
@@ -1,21 +0,0 @@
aprsd.log package
=================

Submodules
----------

aprsd.log.log module
--------------------

.. automodule:: aprsd.log.log
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: aprsd.log
   :members:
   :undoc-members:
   :show-inheritance:
@@ -4,14 +4,6 @@ aprsd.packets package
 Submodules
 ----------

-aprsd.packets.collector module
-------------------------------
-
-.. automodule:: aprsd.packets.collector
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
 aprsd.packets.core module
 -------------------------

@@ -20,14 +12,6 @@ aprsd.packets.core module
    :undoc-members:
    :show-inheritance:

-aprsd.packets.log module
-------------------------
-
-.. automodule:: aprsd.packets.log
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
 aprsd.packets.packet\_list module
 ---------------------------------

@@ -44,6 +44,14 @@ aprsd.plugins.ping module
    :undoc-members:
    :show-inheritance:

+aprsd.plugins.query module
+--------------------------
+
+.. automodule:: aprsd.plugins.query
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 aprsd.plugins.time module
 -------------------------

29 docs/apidoc/aprsd.rpc.rst Normal file
@@ -0,0 +1,29 @@
aprsd.rpc package
=================

Submodules
----------

aprsd.rpc.client module
-----------------------

.. automodule:: aprsd.rpc.client
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.rpc.server module
-----------------------

.. automodule:: aprsd.rpc.server
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: aprsd.rpc
   :members:
   :undoc-members:
   :show-inheritance:
@@ -7,13 +7,13 @@ Subpackages
 .. toctree::
    :maxdepth: 4

-   aprsd.client
+   aprsd.clients
    aprsd.cmds
    aprsd.conf
-   aprsd.log
    aprsd.packets
    aprsd.plugins
-   aprsd.stats
+   aprsd.rpc
    aprsd.threads
    aprsd.utils
    aprsd.web
@@ -29,6 +29,14 @@ aprsd.cli\_helper module
    :undoc-members:
    :show-inheritance:

+aprsd.client module
+-------------------
+
+.. automodule:: aprsd.client
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 aprsd.exception module
 ----------------------

@@ -69,6 +77,14 @@ aprsd.plugin\_utils module
    :undoc-members:
    :show-inheritance:

+aprsd.stats module
+------------------
+
+.. automodule:: aprsd.stats
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 aprsd.wsgi module
 -----------------

@@ -1,29 +0,0 @@
aprsd.stats package
===================

Submodules
----------

aprsd.stats.app module
----------------------

.. automodule:: aprsd.stats.app
   :members:
   :undoc-members:
   :show-inheritance:

aprsd.stats.collector module
----------------------------

.. automodule:: aprsd.stats.collector
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: aprsd.stats
   :members:
   :undoc-members:
   :show-inheritance:
Some files were not shown because too many files have changed in this diff.