# -*- coding: utf-8 -*-
#
# Copyright (C) 2019 Chris Caron <lead2gold@gmail.com>
# All rights reserved.
#
# This code is licensed under the MIT License.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files(the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and / or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions :
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import os
import re
import six
# import yaml

from .. import plugins
from ..AppriseAsset import AppriseAsset
from ..URLBase import URLBase
from ..common import ConfigFormat
from ..common import CONFIG_FORMATS
from ..utils import GET_SCHEMA_RE
from ..utils import parse_list


class ConfigBase(URLBase):
    """
    This is the base class for all supported configuration sources
    """

    # The default encoding to use if not otherwise detected
    encoding = 'utf-8'

    # The default expected configuration format unless otherwise
    # detected by the sub-modules
    default_config_format = ConfigFormat.TEXT

    # This is only set if the user overrides the config format on the URL;
    # it should always initialize itself as None
    config_format = None

    # Don't read more than this amount of data into memory as there is no
    # reason we should be reading in more. This is more of a safeguard than
    # anything else. 128KB (131072B)
    max_buffer_size = 131072
    def __init__(self, **kwargs):
        """
        Initialize some general logging and common server arguments that will
        keep things consistent when working with the configurations that
        inherit this class.
        """

        super(ConfigBase, self).__init__(**kwargs)

        # Tracks previously loaded content for speed
        self._cached_servers = None

        if 'encoding' in kwargs:
            # Store the encoding
            self.encoding = kwargs.get('encoding')

        if 'format' in kwargs:
            # Store the enforced config format
            self.config_format = kwargs.get('format').lower()

            if self.config_format not in CONFIG_FORMATS:
                # Simple error checking
                err = 'An invalid config format ({}) was specified.'.format(
                    self.config_format)
                self.logger.warning(err)
                raise TypeError(err)

        return
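
    # Note: concrete configuration sources (e.g. file or web based plugins)
    # are expected to pass any 'encoding' and/or 'format' keyword they parse
    # from their own URL through to this constructor.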

    def servers(self, asset=None, cache=True, **kwargs):
        """
        Reads the loaded configuration and returns all of the services that
        could be parsed and loaded.
        """

        if cache is True and isinstance(self._cached_servers, list):
            # We already have cached results to return; use them
            return self._cached_servers

        # Our response object
        self._cached_servers = list()

        # read() causes the child class to do whatever it takes for the
        # config plugin to load the data source and return unparsed content.
        # None is returned if there was an error or simply no data.
        content = self.read(**kwargs)
        if not isinstance(content, six.string_types):
            # Nothing more to do
            return list()

        # Our configuration format uses a default if one wasn't detected
        # or enforced.
        config_format = \
            self.default_config_format \
            if self.config_format is None else self.config_format

        # Dynamically load our parse_ function based on our config format
        fn = getattr(ConfigBase, 'config_parse_{}'.format(config_format))
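        # With the default ConfigFormat.TEXT, for example, this resolves to
        # the ConfigBase.config_parse_text staticmethod defined below.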
        # Execute our config parse function which always returns a list
        self._cached_servers.extend(fn(content=content, asset=asset))

        if len(self._cached_servers):
            self.logger.info('Loaded {} entries from {}'.format(
                len(self._cached_servers), self.url()))
        else:
            self.logger.warning('Failed to load configuration from {}'.format(
                self.url()))

        return self._cached_servers

    def read(self):
        """
        This method should be implemented by the child classes
        """
        return None

    @staticmethod
    def parse_url(url, verify_host=True):
        """Parses the URL and returns it broken apart into a dictionary.

        This is very specific and customized for Apprise.

        Args:
            url (str): The URL you want to fully parse.
            verify_host (:obj:`bool`, optional): a flag kept with the parsed
                URL which some child classes will later use to verify SSL
                keys (if SSL transactions take place). Unless under very
                specific circumstances, it is strongly recommended that
                you leave this default value set to True.

        Returns:
            A dictionary is returned containing the URL fully parsed if
            successful, otherwise None is returned.
        """

        results = URLBase.parse_url(url, verify_host=verify_host)
        if not results:
            # We're done; we failed to parse our url
            return results

        # Allow overriding the default config format
        if 'format' in results['qsd']:
            results['format'] = results['qsd'].get('format')
            if results['format'] not in CONFIG_FORMATS:
                URLBase.logger.warning(
                    'Unsupported format specified {}'.format(
                        results['format']))
                del results['format']

        # Defines the encoding of the payload
        if 'encoding' in results['qsd']:
            results['encoding'] = results['qsd'].get('encoding')

        return results
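
    # As an illustrative example (the path below is a hypothetical
    # placeholder), a file based configuration source could be addressed as:
    #
    #     file:///etc/apprise.cfg?format=text&encoding=utf-8
    #
    # where the 'format' and 'encoding' query arguments are picked up by
    # parse_url() above and override the class defaults.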

    @staticmethod
    def config_parse_text(content, asset=None):
        """
        Parse the specified content as though it were a simple text file only
        containing a list of URLs. Return a list of loaded notification
        plugins.

        Optionally associate an asset with the notification.

        The file syntax is:

            #
            # pound/hashtag allow for line comments
            #
            # One or more tags can be identified using commas (,) to
            # separate them.
            <Tag(s)>=<URL>

            # Or you can use this format (no tags associated)
            <URL>

        """
        # For logging, track the line number
        line = 0

        response = list()

        # Define what a valid line should look like
        valid_line_re = re.compile(
            r'^\s*(?P<line>([;#]+(?P<comment>.*))|'
            r'(\s*(?P<tags>[^=]+)=|=)?\s*'
            r'(?P<url>[a-z0-9]{2,9}://.*))?$', re.I)
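        # In short: the pattern accepts blank lines, comment lines starting
        # with '#' or ';', and an optional 'tag1,tag2=' prefix ahead of a
        # 'schema://' styled URL.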

        try:
            # split our content up to read line by line
            content = re.split(r'\r*\n', content)

        except TypeError:
            # content was not expected string type
            ConfigBase.logger.error('Invalid apprise text data specified')
            return list()

        for entry in content:
            # Increment our line count
            line += 1

            result = valid_line_re.match(entry)
            if not result:
                # Invalid syntax
                ConfigBase.logger.error(
                    'Invalid apprise text format found '
                    '{} on line {}.'.format(entry, line))

                # Assume this is a file we shouldn't be parsing. Its owner
                # can read the error printed to screen and take action
                # otherwise.
                return list()

            if not result.group('url'):
                # Comment or empty line; do nothing
                continue

            # Store our url read in
            url = result.group('url')

            # swap hash (#) tag values with their html version
            _url = url.replace('/#', '/%23')

            # Attempt to acquire the schema at the very least to allow our
            # plugins to determine if they can make a better
            # interpretation of a URL geared for them
            schema = GET_SCHEMA_RE.match(_url)

            # Ensure our schema is always in lower case
            schema = schema.group('schema').lower()

            # Some basic validation
            if schema not in plugins.SCHEMA_MAP:
                ConfigBase.logger.warning(
                    'Unsupported schema {} on line {}.'.format(
                        schema, line))
                continue

            # Parse our url details of the server object as a dictionary
            # containing all of the information parsed from our URL
            results = plugins.SCHEMA_MAP[schema].parse_url(_url)

            if results is None:
                # Failed to parse the server URL
                ConfigBase.logger.warning(
                    'Unparseable URL {} on line {}.'.format(url, line))
                continue

            # Build a list of tags to associate with the newly added
            # notifications if any were set
            results['tag'] = set(parse_list(result.group('tags')))

            ConfigBase.logger.trace(
                'URL {} unpacked as:{}{}'.format(
                    url, os.linesep, os.linesep.join(
                        ['{}="{}"'.format(k, v)
                         for k, v in results.items()])))

            # Prepare our Asset Object
            results['asset'] = \
                asset if isinstance(asset, AppriseAsset) else AppriseAsset()

            try:
                # Attempt to create an instance of our plugin using the
                # parsed URL information
                plugin = plugins.SCHEMA_MAP[results['schema']](**results)

                # Create log entry of loaded URL
                ConfigBase.logger.debug(
                    'Loaded URL: {}'.format(plugin.url()))

            except Exception as e:
                # the arguments are invalid or can not be used.
                ConfigBase.logger.warning(
                    'Could not load URL {} on line {}.'.format(
                        url, line))
                ConfigBase.logger.debug('Loading Exception: %s' % str(e))
                continue

            # if we reach here, we successfully loaded our data
            response.append(plugin)

        # Return what was loaded
        return response
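
    # A brief usage sketch of the text parser (the URL is an illustrative
    # placeholder):
    #
    #     entries = ConfigBase.config_parse_text(
    #         'devops=json://localhost:8080/notify')
    #     # 'entries' is now a list of instantiated notification plugins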

    # @staticmethod
    # def config_parse_yaml(content, asset=None):
    #     """
    #     Parse the specified content as though it were a yaml file
    #     specifically formatted for apprise. Return a list of loaded
    #     notification plugins.
    #
    #     Optionally associate an asset with the notification.
    #
    #     """
    #     response = list()
    #
    #     try:
    #         # Load our data (safely)
    #         result = yaml.load(content, Loader=yaml.SafeLoader)
    #
    #     except (AttributeError, yaml.error.MarkedYAMLError) as e:
    #         # Invalid content
    #         ConfigBase.logger.error(
    #             'Invalid apprise yaml data specified.')
    #         ConfigBase.logger.debug(
    #             'YAML Exception:{}{}'.format(os.linesep, e))
    #         return list()
    #
    #     if not isinstance(result, dict):
    #         # Invalid content
    #         ConfigBase.logger.error(
    #             'Invalid apprise yaml structure specified')
    #         return list()
    #
    #     # YAML Version
    #     version = result.get('version', 1)
    #     if version != 1:
    #         # Invalid syntax
    #         ConfigBase.logger.error(
    #             'Invalid apprise yaml version specified {}.'.format(version))
    #         return list()
    #
    #     #
    #     # global asset object
    #     #
    #     asset = asset if isinstance(asset, AppriseAsset) else AppriseAsset()
    #     tokens = result.get('asset', None)
    #     if tokens and isinstance(tokens, dict):
    #         for k, v in tokens.items():
    #
    #             if k.startswith('_') or k.endswith('_'):
    #                 # Entries are considered reserved if they start or end
    #                 # with an underscore
    #                 ConfigBase.logger.warning(
    #                     'Ignored asset key "{}".'.format(k))
    #                 continue
    #
    #             if not (hasattr(asset, k) and
    #                     isinstance(getattr(asset, k), six.string_types)):
    #                 # We can't set a function or non-string set value
    #                 ConfigBase.logger.warning(
    #                     'Invalid asset key "{}".'.format(k))
    #                 continue
    #
    #             if v is None:
    #                 # Convert to an empty string
    #                 v = ''
    #
    #             if not isinstance(v, six.string_types):
    #                 # we must set strings with a string
    #                 ConfigBase.logger.warning(
    #                     'Invalid asset value to "{}".'.format(k))
    #                 continue
    #
    #             # Set our asset object with the new value
    #             setattr(asset, k, v.strip())
    #
    #     #
    #     # global tag root directive
    #     #
    #     global_tags = set()
    #
    #     tags = result.get('tag', None)
    #     if tags and isinstance(tags, (list, tuple, six.string_types)):
    #         # Store any preset tags
    #         global_tags = set(parse_list(tags))
    #
    #     #
    #     # urls root directive
    #     #
    #     urls = result.get('urls', None)
    #     if not isinstance(urls, (list, tuple)):
    #         # Unsupported
    #         ConfigBase.logger.error(
    #             'Missing "urls" directive in apprise yaml.')
    #         return list()
    #
    #     # Iterate over each URL
    #     for no, url in enumerate(urls):
    #
    #         # Our results object is what we use to instantiate our object
    #         # if we can. Reset it to an empty list on each iteration
    #         results = list()
    #
    #         if isinstance(url, six.string_types):
    #             # We're just a simple URL string
    #
    #             # swap hash (#) tag values with their html version
    #             _url = url.replace('/#', '/%23')
    #
    #             # Attempt to acquire the schema at the very least to allow
    #             # our plugins to determine if they can make a better
    #             # interpretation of a URL geared for them
    #             schema = GET_SCHEMA_RE.match(_url)
    #             if schema is None:
    #                 ConfigBase.logger.warning(
    #                     'Unsupported schema in urls entry #{}'.format(
    #                         no + 1))
    #                 continue
    #
    #             # Ensure our schema is always in lower case
    #             schema = schema.group('schema').lower()
    #
    #             # Some basic validation
    #             if schema not in plugins.SCHEMA_MAP:
    #                 ConfigBase.logger.warning(
    #                     'Unsupported schema {} in urls entry #{}'.format(
    #                         schema, no + 1))
    #                 continue
    #
    #             # Parse our url details of the server object as a dictionary
    #             # containing all of the information parsed from our URL
    #             _results = plugins.SCHEMA_MAP[schema].parse_url(_url)
    #             if _results is None:
    #                 ConfigBase.logger.warning(
    #                     'Unparseable {} based url; entry #{}'.format(
    #                         schema, no + 1))
    #                 continue
    #
    #             # add our results to our global set
    #             results.append(_results)
    #
    #         elif isinstance(url, dict):
    #             # We are a url string with additional unescaped options
    #             if six.PY2:
    #                 _url, tokens = next(url.iteritems())
    #             else:  # six.PY3
    #                 _url, tokens = next(iter(url.items()))
    #
    #             # swap hash (#) tag values with their html version
    #             _url = _url.replace('/#', '/%23')
    #
    #             # Get our schema
    #             schema = GET_SCHEMA_RE.match(_url)
    #             if schema is None:
    #                 ConfigBase.logger.warning(
    #                     'Unsupported schema in urls entry #{}'.format(
    #                         no + 1))
    #                 continue
    #
    #             # Ensure our schema is always in lower case
    #             schema = schema.group('schema').lower()
    #
    #             # Some basic validation
    #             if schema not in plugins.SCHEMA_MAP:
    #                 ConfigBase.logger.warning(
    #                     'Unsupported schema {} in urls entry #{}'.format(
    #                         schema, no + 1))
    #                 continue
    #
    #             # Parse our url details of the server object as a dictionary
    #             # containing all of the information parsed from our URL
    #             _results = plugins.SCHEMA_MAP[schema].parse_url(_url)
    #             if _results is None:
    #                 # Setup dictionary
    #                 _results = {
    #                     # Minimum requirements
    #                     'schema': schema,
    #                 }
    #
    #             if tokens is not None:
    #                 # populate and/or override any results populated by
    #                 # parse_url()
    #                 for entries in tokens:
    #                     # Copy ourselves a template of our parsed URL as a
    #                     # base to work with
    #                     r = _results.copy()
    #
    #                     # We are a url string with additional unescaped
    #                     # options
    #                     if isinstance(entries, dict):
    #                         if six.PY2:
    #                             _url, tokens = next(url.iteritems())
    #                         else:  # six.PY3
    #                             _url, tokens = next(iter(url.items()))
    #
    #                     # Tags you just can't over-ride
    #                     if 'schema' in entries:
    #                         del entries['schema']
    #
    #                     # Extend our dictionary with our new entries
    #                     r.update(entries)
    #
    #                     # add our results to our global set
    #                     results.append(r)
    #
    #             else:
    #                 # add our results to our global set
    #                 results.append(_results)
    #
    #         else:
    #             # Unsupported
    #             ConfigBase.logger.warning(
    #                 'Unsupported apprise yaml entry #{}'.format(no + 1))
    #             continue
    #
    #         # Track our entries
    #         entry = 0
    #
    #         while len(results):
    #             # Increment our entry count
    #             entry += 1
    #
    #             # Grab our first item
    #             _results = results.pop(0)
    #
    #             # tag is a special keyword that is managed by the apprise
    #             # object. The below ensures our tags are set correctly
    #             if 'tag' in _results:
    #                 # Tidy our list up
    #                 _results['tag'] = \
    #                     set(parse_list(_results['tag'])) | global_tags
    #
    #             else:
    #                 # Just use the global settings
    #                 _results['tag'] = global_tags
    #
    #             ConfigBase.logger.trace(
    #                 'URL #{}: {} unpacked as:{}{}'
    #                 .format(no + 1, url, os.linesep, os.linesep.join(
    #                     ['{}="{}"'.format(k, a)
    #                      for k, a in _results.items()])))
    #
    #             # Prepare our Asset Object
    #             _results['asset'] = asset
    #
    #             try:
    #                 # Attempt to create an instance of our plugin using
    #                 # the parsed URL information
    #                 plugin = plugins.SCHEMA_MAP[
    #                     _results['schema']](**_results)
    #
    #                 # Create log entry of loaded URL
    #                 ConfigBase.logger.debug(
    #                     'Loaded URL: {}'.format(plugin.url()))
    #
    #             except Exception:
    #                 # the arguments are invalid or can not be used.
    #                 ConfigBase.logger.warning(
    #                     'Could not load apprise yaml entry #{}, item #{}'
    #                     .format(no + 1, entry))
    #                 continue
    #
    #             # if we reach here, we successfully loaded our data
    #             response.append(plugin)
    #
    #     return response

    def pop(self, index):
        """
        Removes an indexed Notification Service from the stack and returns
        it.
        """
        if not isinstance(self._cached_servers, list):
            # Generate ourselves a list of content we can pull from
            self.servers(cache=True)

        # Pop the element off of the stack
        return self._cached_servers.pop(index)

    def __getitem__(self, index):
        """
        Returns the indexed server entry associated with the loaded
        notification servers
        """
        if not isinstance(self._cached_servers, list):
            # Generate ourselves a list of content we can pull from
            self.servers(cache=True)

        return self._cached_servers[index]

    def __iter__(self):
        """
        Returns an iterator to our server list
        """
        if not isinstance(self._cached_servers, list):
            # Generate ourselves a list of content we can pull from
            self.servers(cache=True)

        return iter(self._cached_servers)

    def __len__(self):
        """
        Returns the total number of servers loaded
        """
        if not isinstance(self._cached_servers, list):
            # Generate ourselves a list of content we can pull from
            self.servers(cache=True)

        return len(self._cached_servers)