"""
Blizzard BLP Image File Parser
Author: Robert Xiao
Creation date: July 10 2007
- BLP1 File Format
http://magos.thejefffiles.com/War3ModelEditor/MagosBlpFormat.txt
- BLP2 File Format (Wikipedia)
http://en.wikipedia.org/wiki/.BLP
- S3TC (DXT1, 3, 5) Formats
http://en.wikipedia.org/wiki/S3_Texture_Compression
"""
from hachoir_py3.core.endian import LITTLE_ENDIAN
from hachoir_py3.field import String, UInt32, UInt8, Enum, FieldSet, RawBytes, GenericVector, Bit, Bits
from hachoir_py3.parser.parser import Parser
from hachoir_py3.parser.image.common import PaletteRGBA
from hachoir_py3.core.tools import alignValue
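# Usage sketch (an assumption, not taken from this file): assuming hachoir_py3
# mirrors upstream hachoir's helper API (FileInputStream/guessParser), a BLP
# texture could be inspected roughly like this:
#
#     from hachoir_py3.stream import FileInputStream
#     from hachoir_py3.parser import guessParser
#
#     parser = guessParser(FileInputStream("texture.blp"))  # hypothetical file name
#     if parser:  # BLP1File or BLP2File, selected by the magic bytes
#         print(parser["width"].value, parser["height"].value)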
class PaletteIndex(UInt8):
def createDescription(self):
return "Palette index %i (%s)" % (self.value, self["/palette/color[%i]" % self.value].description)
class Generic2DArray(FieldSet):
def __init__(self, parent, name, width, height, item_class, row_name="row", item_name="item", *args, **kwargs):
FieldSet.__init__(self, parent, name, *args, **kwargs)
self.width = width
self.height = height
self.item_class = item_class
self.row_name = row_name
self.item_name = item_name
def createFields(self):
for i in range(self.height):
yield GenericVector(self, self.row_name + "[]", self.width, self.item_class, self.item_name)
class BLP1File(Parser):
MAGIC = b"BLP1"
PARSER_TAGS = {
"id": "blp1",
"category": "game",
"file_ext": ("blp",),
"mime": ("application/x-blp",), # TODO: real mime type???
"magic": ((MAGIC, 0),),
"min_size": 7 * 32, # 7 DWORDs start, incl. magic
"description": "Blizzard Image Format, version 1",
}
endian = LITTLE_ENDIAN
def validate(self):
if self.stream.readBytes(0, 4) != b"BLP1":
return "Invalid magic"
return True
def createFields(self):
yield String(self, "magic", 4, "Signature (BLP1)")
yield Enum(UInt32(self, "compression"), {
0: "JPEG Compression",
1: "Uncompressed"})
yield UInt32(self, "flags")
yield UInt32(self, "width")
yield UInt32(self, "height")
yield Enum(UInt32(self, "type"), {
3: "Uncompressed Index List + Alpha List",
4: "Uncompressed Index List + Alpha List",
5: "Uncompressed Index List"})
yield UInt32(self, "subtype")
for i in range(16):
yield UInt32(self, "mipmap_offset[]")
for i in range(16):
yield UInt32(self, "mipmap_size[]")
compression = self["compression"].value
image_type = self["type"].value
width = self["width"].value
height = self["height"].value
if compression == 0: # JPEG Compression
yield UInt32(self, "jpeg_header_len")
yield RawBytes(self, "jpeg_header", self["jpeg_header_len"].value, "Shared JPEG Header")
else:
yield PaletteRGBA(self, "palette", 256)
offsets = self.array("mipmap_offset")
sizes = self.array("mipmap_size")
for i in range(16):
if not offsets[i].value or not sizes[i].value:
continue
padding = self.seekByte(offsets[i].value)
if padding:
yield padding
if compression == 0:
yield RawBytes(self, "mipmap[%i]" % i, sizes[i].value, "JPEG data, append to header to recover complete image")
elif compression == 1:
yield Generic2DArray(self, "mipmap_indexes[%i]" % i, width, height, PaletteIndex, "row", "index", "Indexes into the palette")
if image_type in (3, 4):
yield Generic2DArray(self, "mipmap_alphas[%i]" % i, width, height, UInt8, "row", "alpha", "Alpha values")
            width //= 2  # each mipmap level halves both dimensions; integer division keeps them ints under py3
            height //= 2
def interp_avg(data_low, data_high, n):
"""Interpolated averages. For example,
>>> list(interp_avg(1, 10, 3))
[4, 7]
"""
    if isinstance(data_low, int):
        for i in range(1, n):
            # integer division keeps interpolated components ints, matching the
            # docstring example and the bit shifts done later in color_name()
            yield (data_low * (n - i) + data_high * i) // n
    else:  # iterable of components (e.g. an RGB triple)
        pairs = list(zip(data_low, data_high))
        pair_iters = [interp_avg(x, y, n) for x, y in pairs]
        for i in range(1, n):
            yield [next(it) for it in pair_iters]
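# Illustrative call with RGB triples, as the DXT1 decoder below uses it
# (values computed by hand for this example):
#     list(interp_avg([0, 0, 0], [31, 63, 31], 3)) -> [[10, 21, 10], [20, 42, 20]]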
def color_name(data, bits):
"""Color names in #RRGGBB format, given the number of bits for each component."""
ret = ["#"]
for i in range(3):
ret.append("%02X" % (data[i] << (8 - bits[i])))
return ''.join(ret)
class DXT1(FieldSet):
static_size = 64
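    # A DXT1 block is 64 bits: two 16-bit RGB565 endpoint colors (read low bits
    # first, hence the blue/green/red order below) followed by sixteen 2-bit
    # pixel indexes into the derived 4-entry color table.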
def __init__(self, parent, name, dxt2_mode=False, *args, **kwargs):
"""with dxt2_mode on, this field will always use the four color model"""
FieldSet.__init__(self, parent, name, *args, **kwargs)
self.dxt2_mode = dxt2_mode
def createFields(self):
values = [[], []]
for i in (0, 1):
yield Bits(self, "blue[]", 5)
yield Bits(self, "green[]", 6)
yield Bits(self, "red[]", 5)
values[i] = [self["red[%i]" % i].value,
self["green[%i]" % i].value,
self["blue[%i]" % i].value]
if values[0] > values[1] or self.dxt2_mode:
values += interp_avg(values[0], values[1], 3)
else:
values += interp_avg(values[0], values[1], 2)
values.append(None) # transparent
for i in range(16):
pixel = Bits(self, "pixel[%i][%i]" % divmod(i, 4), 2)
color = values[pixel.value]
if color is None:
pixel._description = "Transparent"
else:
pixel._description = "RGB color: %s" % color_name(color, [
5, 6, 5])
yield pixel
class DXT3Alpha(FieldSet):
static_size = 64
def createFields(self):
for i in range(16):
yield Bits(self, "alpha[%i][%i]" % divmod(i, 4), 4)
class DXT3(FieldSet):
static_size = 128
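    # A DXT3 block is 128 bits: 64 bits of explicit 4-bit alphas followed by a
    # 64-bit DXT1-style color block that always uses the four-color mode.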
def createFields(self):
yield DXT3Alpha(self, "alpha", "Alpha Channel Data")
yield DXT1(self, "color", True, "Color Channel Data")
class DXT5Alpha(FieldSet):
static_size = 64
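    # The DXT5 alpha half-block is 64 bits: two 8-bit alpha endpoints plus
    # sixteen 3-bit indexes into the interpolated alpha table built below.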
def createFields(self):
values = []
yield UInt8(self, "alpha_val[0]", "First alpha value")
values.append(self["alpha_val[0]"].value)
yield UInt8(self, "alpha_val[1]", "Second alpha value")
values.append(self["alpha_val[1]"].value)
if values[0] > values[1]:
values += interp_avg(values[0], values[1], 7)
else:
values += interp_avg(values[0], values[1], 5)
values += [0, 255]
for i in range(16):
pixel = Bits(self, "alpha[%i][%i]" % divmod(i, 4), 3)
alpha = values[pixel.value]
pixel._description = "Alpha value: %i" % alpha
yield pixel
class DXT5(FieldSet):
static_size = 128
def createFields(self):
yield DXT5Alpha(self, "alpha", "Alpha Channel Data")
yield DXT1(self, "color", True, "Color Channel Data")
class BLP2File(Parser):
MAGIC = b"BLP2"
PARSER_TAGS = {
"id": "blp2",
"category": "game",
"file_ext": ("blp",),
"mime": ("application/x-blp",),
"magic": ((MAGIC, 0),),
"min_size": 5 * 32, # 5 DWORDs start, incl. magic
"description": "Blizzard Image Format, version 2",
}
endian = LITTLE_ENDIAN
def validate(self):
if self.stream.readBytes(0, 4) != b"BLP2":
return "Invalid magic"
return True
def createFields(self):
yield String(self, "magic", 4, "Signature (BLP2)")
yield Enum(UInt32(self, "compression", "Compression type"), {
0: "JPEG Compressed",
1: "Uncompressed or DXT/S3TC compressed"})
yield Enum(UInt8(self, "encoding", "Encoding type"), {
1: "Raw",
2: "DXT/S3TC Texture Compression (a.k.a. DirectX)"})
yield UInt8(self, "alpha_depth", "Alpha channel depth, in bits (0 = no alpha)")
yield Enum(UInt8(self, "alpha_encoding", "Encoding used for alpha channel"), {
0: "DXT1 alpha (0 or 1 bit alpha)",
1: "DXT3 alpha (4 bit alpha)",
7: "DXT5 alpha (8 bit interpolated alpha)"})
yield Enum(UInt8(self, "has_mips", "Are mip levels present?"), {
0: "No mip levels",
1: "Mip levels present; number of levels determined by image size"})
yield UInt32(self, "width", "Base image width")
yield UInt32(self, "height", "Base image height")
for i in range(16):
yield UInt32(self, "mipmap_offset[]")
for i in range(16):
yield UInt32(self, "mipmap_size[]")
yield PaletteRGBA(self, "palette", 256)
compression = self["compression"].value
encoding = self["encoding"].value
alpha_depth = self["alpha_depth"].value
alpha_encoding = self["alpha_encoding"].value
width = self["width"].value
height = self["height"].value
if compression == 0: # JPEG Compression
yield UInt32(self, "jpeg_header_len")
yield RawBytes(self, "jpeg_header", self["jpeg_header_len"].value, "Shared JPEG Header")
offsets = self.array("mipmap_offset")
sizes = self.array("mipmap_size")
for i in range(16):
if not offsets[i].value or not sizes[i].value:
continue
padding = self.seekByte(offsets[i].value)
if padding:
yield padding
if compression == 0:
yield RawBytes(self, "mipmap[%i]" % i, sizes[i].value, "JPEG data, append to header to recover complete image")
elif compression == 1 and encoding == 1:
yield Generic2DArray(self, "mipmap_indexes[%i]" % i, height, width, PaletteIndex, "row", "index", "Indexes into the palette")
                if alpha_depth == 1:
                    # one alpha bit per pixel; the row/item names need the 2D helper
                    yield Generic2DArray(self, "mipmap_alphas[%i]" % i, height, width, Bit, "row", "is_opaque", "Alpha values")
                elif alpha_depth == 8:
                    yield Generic2DArray(self, "mipmap_alphas[%i]" % i, height, width, UInt8, "row", "alpha", "Alpha values")
elif compression == 1 and encoding == 2:
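                # S3TC stores the image as 4x4 pixel blocks, so both dimensions are
                # rounded up to a multiple of 4 before computing the block grid.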
block_height = alignValue(height, 4) // 4
block_width = alignValue(width, 4) // 4
if alpha_depth in [0, 1] and alpha_encoding == 0:
yield Generic2DArray(self, "mipmap[%i]" % i, block_height, block_width, DXT1, "row", "block", "DXT1-compressed image blocks")
elif alpha_depth == 8 and alpha_encoding == 1:
yield Generic2DArray(self, "mipmap[%i]" % i, block_height, block_width, DXT3, "row", "block", "DXT3-compressed image blocks")
elif alpha_depth == 8 and alpha_encoding == 7:
yield Generic2DArray(self, "mipmap[%i]" % i, block_height, block_width, DXT5, "row", "block", "DXT5-compressed image blocks")
            width //= 2  # halve dimensions for the next mipmap level (integer division for py3)
            height //= 2