Compare commits


141 Commits

Author SHA1 Message Date
Ruud 096267376b Movie base class extensions 10 years ago
Ruud 911e254298 TV cleanup 10 years ago
Ruud c8504d0ae2 Merge branch 'develop' into tv_redesign 10 years ago
Ruud Burger 2bca2863ae Merge pull request #5216 from k0ekk0ek/extratorrent_provider 10 years ago
Ruud b5a401d9e5 Combined JS 10 years ago
Ruud 6e373a0f19 Merge branch 'develop' into tv_develop_merged 10 years ago
Jeroen Koekkoek 0f57e9369e New torrent provider ExtraTorrent 10 years ago
Ruud c89f4e5393 Merge branch 'develop' into tv 10 years ago
Ruud Burger d5b737fc77 Merge pull request #5196 from k0ekk0ek/provider_type_info 10 years ago
Ruud Burger cb98b06fbd Merge pull request #5192 from k0ekk0ek/show_switch 10 years ago
Ruud Burger d01fc73081 Merge pull request #5188 from k0ekk0ek/searcher_merge 10 years ago
Jeroen Koekkoek 83cf2db1e9 [TV] Fix YarrProvider constructor not being invoked 10 years ago
Jeroen Koekkoek b74d6a4eb7 [TV] Also remove nzbindex and publichd in tv branch 10 years ago
Jeroen Koekkoek cf82f5f422 [TV][Provider] Show which media types are supported by torrent and nzb providers in the user interface 10 years ago
Jeroen Koekkoek a85f35c33b [TV] Make show support configurable 10 years ago
Jeroen Koekkoek c9faf31ee8 [TV][Searcher] Merge show and movie searcher code 10 years ago
Ruud f390624382 Merge branch 'develop' into tv 10 years ago
Ruud Burger 756d0451a2 Merge pull request #5156 from k0ekk0ek/traktv2_info_provider 10 years ago
Jeroen Koekkoek 1bcb9af4ef [TV][Provider] Move to Trakt.tv V2 API 10 years ago
Ruud Burger 1cbbbfc38d Merge pull request #4940 from k0ekk0ek/kickass_show_provider 10 years ago
Ruud Burger f8b90905d0 Merge pull request #4941 from k0ekk0ek/tvrage_info_provider 10 years ago
Jeroen Koekkoek ad85b2ff95 [TV][Provider] Added TVRage info provider 10 years ago
Jeroen Koekkoek 728e96a44d [TV][Provider] Added Episode and Season providers for KickassTorrents 10 years ago
Ruud de5fb285a1 Merge branch 'tv_quality' of https://github.com/fuzeman/CouchPotatoServer into fuzeman-tv_quality 10 years ago
Ruud bc0262389c Merge branch 'tv' of github.com:RuudBurger/CouchPotatoServer into tv 10 years ago
Ruud 40784f3c4e Merge branch 'develop' into tv 10 years ago
Ruud Burger 147e565249 Merge pull request #4157 from seedzero/tv-develop-merge 11 years ago
seedzero 4e568ff515 Merge branch 'develop' into tv 11 years ago
Ruud Burger 6c586f8b19 Merge pull request #3909 from seedzero/tv-list-api 11 years ago
seedzero bb609e073b [TV] Fix for new 'list' API output 11 years ago
seedzero 02571d0f5d 'list' API's, return as media type 11 years ago
Dean Gardiner 106c5c2d7f [TV] Catch missing "video" info in caper result chain 11 years ago
Dean Gardiner 1b0df8fe45 [TV] Adjustments to "quality.guess" to ensure omitted properties decrease the quality score 11 years ago
Dean Gardiner a11aa2c14e [TV] Fixed show adding bug when show/episode identifiers conflict 11 years ago
Dean Gardiner f070b18d0f Added "types" parameter to "media.with_identifiers" 11 years ago
Dean Gardiner 8c51b8548c [TV] Use "searcher.contains_other_quality" in season searcher, added "480p" quality preset, other small fixes 11 years ago
Dean Gardiner 2ec81b3de6 Support for wildcard provider categories 11 years ago
Dean Gardiner aa02f4d977 [TV] "quality.guess" working, searcher changed to use "searcher.contains_other_quality" now 11 years ago
Dean Gardiner 1c6026d0a2 Removed old "matcher.correct_quality", changes/fixes to support extended quality format 11 years ago
Dean Gardiner 6fbd5e0f3a [TV] New cat_ids structure for extended quality system 11 years ago
Dean Gardiner a0b0e3055e Added "types" parameter to "quality.single", support for property matching in getCatId() 11 years ago
Dean Gardiner 2f48dc2bca [TV] Initial draft of properties/qualities structure 11 years ago
Dean Gardiner 058d241c73 Moved "quality.single" to "quality/main", created "quality.get" event 11 years ago
Dean Gardiner 8836cf7684 [TV] Added shell "quality.guess" 11 years ago
Dean Gardiner f4a3b2eccc Moved "movie" guessing into "movie/quality", added "type" filter to "quality" plugin and base searcher methods 11 years ago
Dean Gardiner 5c62144403 [TV] Removed old "quality_order" in searcher 11 years ago
Dean Gardiner c18e284aa6 Split "quality" plugin into media-specific plugins 11 years ago
Dean Gardiner 60e8c3ad9b Merge branch 'develop_tv_sync' into tv 11 years ago
Dean Gardiner 894f46a741 Merge branch 'develop' into tv 11 years ago
Dean Gardiner 7d5efad20c Merge pull request #3817 from seedzero/tv 11 years ago
seedzero ba14c95e82 Documentation added for media type .list & .delete APIs 11 years ago
seedzero 2ad249b195 Fixed media.types & addSingleListView 11 years ago
Dean Gardiner deb7943203 Fixed broken quality profile identifiers 11 years ago
Dean Gardiner 4e78b0cac1 Merge pull request #3750 from seedzero/tv 11 years ago
seedzero c8f0cdc90f Newznab search fixes 11 years ago
seedzero ce80ac5a33 Fix show search not including quality profile 11 years ago
seedzero 5e438e5343 Stop movie searcher searching for TV shows and 11 years ago
Dean Gardiner 12dd9c6b14 [TV] Updated ShowBase.create() to use "media.with_identifiers" 11 years ago
Dean Gardiner 478dc0f242 Changed "media.with_identifiers" to remove "No media found with..." messages 11 years ago
Dean Gardiner 5d886ccf1f [TV] Moved "episode" and "season" modules into "show/_base/", fixed episode update bug 11 years ago
Dean Gardiner 7f466f9c08 [TV] Split matcher into separate modules 11 years ago
Dean Gardiner 7fbd89a317 [TV] Use trakt.tv for show searches (better text searching, posters) 11 years ago
Dean Gardiner 6f620f451b Merge branch 'tv_season_searcher' into tv 11 years ago
Dean Gardiner dea5bbbf1c Update score plugin to use the "root" media (show, movie) title 11 years ago
Dean Gardiner 68bde6086d [TV] Fixed incorrect 'release.delete' call in searcher and issue adding shows 11 years ago
Dean Gardiner 34bb8c7993 [TV] Fixed issue retrieving episodes in season searcher 11 years ago
Dean Gardiner 74c7cf4381 Added children to "library.related" 11 years ago
Dean Gardiner efe0a4af53 [TV] Minor adjustments to season item UI 11 years ago
Dean Gardiner b9c6d983e1 [TV] Added season actions/releases 11 years ago
Dean Gardiner 3d6ce1c2e2 [TV] Working show and season searcher, fixed season correctRelease/matcher 11 years ago
Dean Gardiner a06bfcb3bf Merge branch 'tv_xem' into tv 11 years ago
Dean Gardiner fe2e508e4c Fix possible dashboard error, add "types" parameter to "media.with_status", limit suggestions to movies (for now) 11 years ago
Dean Gardiner 72cb53bcc0 [TV] Fixed xem episode updates and finished adding "update_extras" events 11 years ago
Dean Gardiner 90be6ec38b [TV] Renamed "[media].update_info" events, renamed "updateInfo" functions 11 years ago
Dean Gardiner 212d5c5432 Renamed "[media].update_info" event to "[media].update" 11 years ago
Dean Gardiner b10e25ab8c [TV] Disabled excessive logging from tvdb_api 11 years ago
Dean Gardiner 5c4f8186df [TV] Restructured and cleaned "show.add" and "show.update_info" 11 years ago
Dean Gardiner 02d4a7625b [TV] Fixes to xem info provider, updated data structure 11 years ago
Dean Gardiner 8018ef979f [TV] Fixes to TheTVDb.getSeasonInfo 11 years ago
Dean Gardiner 482f5f82e6 [TV] Disable tvdb query simplifying (API doesn't support "fuzzy" matching) 11 years ago
Dean Gardiner 88f8cd708b [TV] Implemented fast show updates, working on "update_info" restructure 11 years ago
Dean Gardiner aa92d76eb4 Added "media_id" parameter to "library.tree" event 11 years ago
Dean Gardiner 3e05bc8d78 Added "find" helper function 11 years ago
Dean Gardiner 4de9879927 [TV] Fixed dashboard issues with shows 11 years ago
Dean Gardiner 479e20d8f3 [TV] Added "eta" display placeholder (data not there yet) 11 years ago
Dean Gardiner f7ed5d4b2f Merge remote-tracking branch 'RuudBurger/develop' into tv 11 years ago
Dean Gardiner bda44848a1 [TV] Added "full_search" placeholder methods to avoid errors on startup 11 years ago
Dean Gardiner f3ae8a05cc [TV] Added "status" to episode and season media 11 years ago
Dean Gardiner 43275297e9 [TV] Improved episode actions drop-down (releases) 11 years ago
Dean Gardiner d79556f36f [TV] Moved imdb and refresh components to new "episode.actions.js", implemented episode "release" action 11 years ago
Dean Gardiner 8fe3d6f58f [TV] Adjust episode table column size, added quality indicators 11 years ago
Dean Gardiner a1ca367037 Include releases in "library.tree" 11 years ago
Dean Gardiner bfdf565a0d [TV] Changed show list to call "media.available_chars" correctly 11 years ago
Dean Gardiner c77eaabbff [TV] Update messages containing "movie", fixed alignment and search box 11 years ago
Dean Gardiner 44063dfcc5 [TV] Only expand/extend height when showing the episodes view 11 years ago
Dean Gardiner c2c98f644b [TV] Fixed matcher and provider events 11 years ago
Dean Gardiner 74caecbe89 Merge branch 'tv_interface' into tv 11 years ago
Dean Gardiner a721a40d5e Merge branch 'tv_metadata' into tv 11 years ago
Dean Gardiner 338e645579 Merge branch 'tv_searcher' into tv 11 years ago
Dean Gardiner 5f2dd0aac3 [TV] Fixed episode info updates 11 years ago
Dean Gardiner 0f434afd33 [TV] Prefix child media types with "show." 11 years ago
Dean Gardiner 364527b0b2 Fixed "library.related" and "library.tree" to work with "show.episode", "show.season" media types 11 years ago
Dean Gardiner ac857301ac [TV] Create "Episode" class, "media.refresh" is now fired 11 years ago
Dean Gardiner c038c66dc9 Switched "library.tree" to use "media_children" index 11 years ago
Dean Gardiner c81891683c [TV] Cleaner season/episode titles in list, move specials to bottom 11 years ago
Dean Gardiner d787cb0cdb [TV] Build out basic show interface with episode list 11 years ago
Dean Gardiner 2d5a3e7564 Added "library.tree" event/api call 11 years ago
Dean Gardiner 7ae178e2a6 Fixed MediaBase.getPoster(), switched MovieBase to use this generic method 11 years ago
Dean Gardiner e885ade131 [TV] Fixed show posters 11 years ago
Dean Gardiner 0925dd08bc [TV] Split searcher into separate modules, searching/snatching mostly working again 11 years ago
Dean Gardiner 050d8ccfda Added "library.root" event, fixes to "matcher", "release" and "score" to use "library.root" + handle missing "year" 11 years ago
Dean Gardiner 4efdca91d5 [TV] Added temporary TV qualities 11 years ago
Dean Gardiner 0d128a3525 [TV] Fixed query/identifier event handlers and moved them to [media.show.library] 11 years ago
Dean Gardiner 0f97e57307 Added "library.related" event and "library.query", "library.related" API calls 11 years ago
Ruud 6833e78546 Set correct branch 11 years ago
Ruud 30c56f29d0 Merge branch 'refs/heads/develop' into tv 11 years ago
Dean Gardiner 7ed0c6f099 Fixed missing identifiers for 'thetvdb' 11 years ago
Ruud af64961502 Episode searching 11 years ago
Ruud 342e61da48 Show searcher init 11 years ago
Ruud 8ce30f0aad Nested media index 11 years ago
Ruud 63b8e3ff1a Remove downloaders.js from clientscript 11 years ago
Ruud 91c3df7c46 Use correct super class 11 years ago
Ruud ae3d9c0a0a Add wanted shows 11 years ago
Ruud 090eb6f14d Allow type option in listing 11 years ago
Ruud 44de06f518 Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud b23db7541d Use correct key to check success 11 years ago
Ruud 7410288781 Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud bb4252363d Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud 0a0a1704be Add show updated 11 years ago
Ruud b13b32952f Only add rating if available 11 years ago
Ruud 0978ac33bc Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud 6e8b7d25e5 Don't try to parse episodes if they aren't in the data 11 years ago
Ruud 0f555dbb85 Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud 43e4ed6e2d Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud 2e50eb487c Cleanup 11 years ago
Ruud 70e5f1a6d8 Merge branch 'refs/heads/develop' into tv 11 years ago
Ruud 9cfa7fa2a3 Cleanup 11 years ago
Ruud cfc9f524a7 Cleanup 11 years ago
Ruud 8281fdc08b Show cleanup 11 years ago
Ruud 949f76cd50 Merge branch 'refs/heads/nosql' into tv 11 years ago
Ruud 9631be1ee4 Move tv branch to nosql 11 years ago
100 changed files (number of changed lines in parentheses):

  1. couchpotato/core/helpers/variable.py (5)
  2. couchpotato/core/media/_base/matcher/main.py (18)
  3. couchpotato/core/media/_base/media/main.py (60)
  4. couchpotato/core/media/_base/providers/base.py (81)
  5. couchpotato/core/media/_base/providers/nzb/base.py (4)
  6. couchpotato/core/media/_base/providers/torrent/base.py (4)
  7. couchpotato/core/media/_base/providers/torrent/extratorrent.py (143)
  8. couchpotato/core/media/_base/providers/torrent/kickasstorrents.py (152)
  9. couchpotato/core/media/_base/quality/__init__.py (7)
  10. couchpotato/core/media/_base/quality/base.py (185)
  11. couchpotato/core/media/_base/quality/index.py (0)
  12. couchpotato/core/media/_base/quality/main.py (82)
  13. couchpotato/core/media/_base/quality/static/quality.js (0)
  14. couchpotato/core/media/_base/searcher/main.py (185)
  15. couchpotato/core/media/movie/_base/static/list.js (10)
  16. couchpotato/core/media/movie/_base/static/search.js (4)
  17. couchpotato/core/media/movie/providers/torrent/extratorrent.py (12)
  18. couchpotato/core/media/movie/providers/torrent/kickasstorrents.py (73)
  19. couchpotato/core/media/movie/quality/__init__.py (0)
  20. couchpotato/core/media/movie/quality/main.py (254)
  21. couchpotato/core/media/movie/searcher.py (179)
  22. couchpotato/core/media/show/__init__.py (55)
  23. couchpotato/core/media/show/_base/__init__.py (4)
  24. couchpotato/core/media/show/_base/episode.py (111)
  25. couchpotato/core/media/show/_base/main.py (289)
  26. couchpotato/core/media/show/_base/season.py (96)
  27. couchpotato/core/media/show/_base/static/episode.actions.js (0)
  28. couchpotato/core/media/show/_base/static/episode.js (128)
  29. couchpotato/core/media/show/_base/static/list.js (8)
  30. couchpotato/core/media/show/_base/static/page.js (56)
  31. couchpotato/core/media/show/_base/static/search.js (7)
  32. couchpotato/core/media/show/_base/static/season.js (127)
  33. couchpotato/core/media/show/_base/static/show.episodes.js (92)
  34. couchpotato/core/media/show/_base/static/show.js (5)
  35. couchpotato/core/media/show/_base/static/show.scss (1225)
  36. couchpotato/core/media/show/_base/static/wanted.js (28)
  37. couchpotato/core/media/show/library/__init__.py (0)
  38. couchpotato/core/media/show/library/episode.py (71)
  39. couchpotato/core/media/show/library/season.py (52)
  40. couchpotato/core/media/show/library/show.py (38)
  41. couchpotato/core/media/show/matcher/__init__.py (7)
  42. couchpotato/core/media/show/matcher/base.py (61)
  43. couchpotato/core/media/show/matcher/episode.py (30)
  44. couchpotato/core/media/show/matcher/main.py (9)
  45. couchpotato/core/media/show/matcher/season.py (27)
  46. couchpotato/core/media/show/providers/__init__.py (0)
  47. couchpotato/core/media/show/providers/base.py (13)
  48. couchpotato/core/media/show/providers/info/__init__.py (0)
  49. couchpotato/core/media/show/providers/info/thetvdb.py (376)
  50. couchpotato/core/media/show/providers/info/trakt.py (64)
  51. couchpotato/core/media/show/providers/info/tvrage.py (285)
  52. couchpotato/core/media/show/providers/info/xem.py (216)
  53. couchpotato/core/media/show/providers/nzb/__init__.py (0)
  54. couchpotato/core/media/show/providers/nzb/binsearch.py (51)
  55. couchpotato/core/media/show/providers/nzb/newznab.py (49)
  56. couchpotato/core/media/show/providers/nzb/nzbclub.py (52)
  57. couchpotato/core/media/show/providers/torrent/__init__.py (0)
  58. couchpotato/core/media/show/providers/torrent/bithdtv.py (36)
  59. couchpotato/core/media/show/providers/torrent/bitsoup.py (41)
  60. couchpotato/core/media/show/providers/torrent/extratorrent.py (24)
  61. couchpotato/core/media/show/providers/torrent/iptorrents.py (28)
  62. couchpotato/core/media/show/providers/torrent/kickasstorrents.py (34)
  63. couchpotato/core/media/show/providers/torrent/sceneaccess.py (60)
  64. couchpotato/core/media/show/providers/torrent/thepiratebay.py (46)
  65. couchpotato/core/media/show/providers/torrent/torrentday.py (34)
  66. couchpotato/core/media/show/providers/torrent/torrentleech.py (42)
  67. couchpotato/core/media/show/providers/torrent/torrentpotato.py (38)
  68. couchpotato/core/media/show/providers/torrent/torrentshack.py (52)
  69. couchpotato/core/media/show/quality/__init__.py (0)
  70. couchpotato/core/media/show/quality/main.py (196)
  71. couchpotato/core/media/show/searcher/__init__.py (0)
  72. couchpotato/core/media/show/searcher/episode.py (109)
  73. couchpotato/core/media/show/searcher/season.py (137)
  74. couchpotato/core/media/show/searcher/show.py (93)
  75. couchpotato/core/plugins/dashboard.py (5)
  76. couchpotato/core/plugins/quality/__init__.py (5)
  77. couchpotato/core/plugins/score/main.py (19)
  78. couchpotato/core/plugins/score/scores.py (4)
  79. couchpotato/static/scripts/block/navigation.js (9)
  80. couchpotato/static/scripts/combined.base.min.js (14)
  81. couchpotato/static/scripts/combined.plugins.min.js (800)
  82. couchpotato/static/scripts/couchpotato.js (8)
  83. couchpotato/static/style/combined.min.css (2)
  84. couchpotato/templates/index.html (5)
  85. libs/qcond/__init__.py (42)
  86. libs/qcond/compat.py (23)
  87. libs/qcond/helpers.py (84)
  88. libs/qcond/transformers/__init__.py (0)
  89. libs/qcond/transformers/base.py (21)
  90. libs/qcond/transformers/merge.py (241)
  91. libs/qcond/transformers/slice.py (280)
  92. libs/qcond/transformers/strip_common.py (26)
  93. libs/tvdb_api/.gitignore (4)
  94. libs/tvdb_api/.travis.yml (9)
  95. libs/tvdb_api/MANIFEST.in (4)
  96. libs/tvdb_api/Rakefile (103)
  97. libs/tvdb_api/UNLICENSE (26)
  98. libs/tvdb_api/__init__.py (0)
  99. libs/tvdb_api/readme.md (109)
  100. libs/tvdb_api/setup.py (35)

couchpotato/core/helpers/variable.py (5 changed lines)

@@ -8,6 +8,7 @@ import re
import string
import sys
import traceback
import time
from couchpotato.core.helpers.encoding import simplifyString, toSafeString, ss, sp
from couchpotato.core.logger import CPLog
@@ -411,3 +412,7 @@ def find(func, iterable):
return item
return None
def strtotime(string, format):
timestamp = time.strptime(string, format)
return time.mktime(timestamp)
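The new `strtotime` helper simply chains `time.strptime` and `time.mktime`; a minimal standalone sketch of the same logic, outside CouchPotato:

```python
import time

def strtotime(string, format):
    # Parse a date string according to `format` into a struct_time,
    # then convert it to a local-time Unix timestamp (float)
    timestamp = time.strptime(string, format)
    return time.mktime(timestamp)
```

Note that `time.mktime` interprets the parsed value in the local timezone, so the returned epoch value depends on the machine's TZ setting.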

couchpotato/core/media/_base/matcher/main.py (18 changed lines)

@@ -21,7 +21,6 @@ class Matcher(MatcherBase):
addEvent('matcher.construct_from_raw', self.constructFromRaw)
addEvent('matcher.correct_title', self.correctTitle)
addEvent('matcher.correct_quality', self.correctQuality)
def parse(self, name, parser='scene'):
return self.caper.parse(name, parser)
@@ -70,20 +69,3 @@ class Matcher(MatcherBase):
return True
return False
def correctQuality(self, chain, quality, quality_map):
if quality['identifier'] not in quality_map:
log.info2('Wrong: unknown preferred quality %s', quality['identifier'])
return False
if 'video' not in chain.info:
log.info2('Wrong: no video tags found')
return False
video_tags = quality_map[quality['identifier']]
if not self.chainMatch(chain, 'video', video_tags):
log.info2('Wrong: %s tags not in chain', video_tags)
return False
return True

couchpotato/core/media/_base/media/main.py (60 changed lines)

@@ -198,14 +198,25 @@ class MediaPlugin(MediaBase):
else:
yield ms
def withIdentifiers(self, identifiers, with_doc = False):
def withIdentifiers(self, identifiers, with_doc = False, types = None):
if types and not with_doc:
raise ValueError("Unable to filter types without with_doc = True")
db = get_db()
for x in identifiers:
try:
return db.get('media', '%s-%s' % (x, identifiers[x]), with_doc = with_doc)
except:
pass
items = db.get_many('media', '%s-%s' % (x, identifiers[x]), with_doc = with_doc)
if not items:
# No items found, move to next identifier
continue
for item in items:
if types and item['doc'].get('type') not in types:
# Type doesn't match request, move to next item
continue
return item
log.debug('No media found with identifiers: %s', identifiers)
return False
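The reworked `withIdentifiers` iterates candidate documents per identifier and filters on the document's `type` (which is why `types` requires `with_doc = True`). A minimal sketch of that lookup, using a plain dict as a stand-in for the CodernityDB index (the `db_items` shape here is hypothetical, not the real `db.get_many` API):

```python
def with_identifiers(db_items, identifiers, types=None):
    # db_items maps '<name>-<value>' keys to lists of {'doc': {...}} records,
    # mimicking what db.get_many('media', ..., with_doc=True) yields
    for name, value in identifiers.items():
        for item in db_items.get('%s-%s' % (name, value), []):
            if types and item['doc'].get('type') not in types:
                continue  # type doesn't match the request, try the next item
            return item
    return False  # no media found with these identifiers
```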
@@ -273,10 +284,6 @@ class MediaPlugin(MediaBase):
for x in filter_by:
media_ids = [n for n in media_ids if n in filter_by[x]]
total_count = len(media_ids)
if total_count == 0:
return 0, []
offset = 0
limit = -1
if limit_offset:
@@ -306,11 +313,30 @@ class MediaPlugin(MediaBase):
media_ids.remove(media_id)
if len(media_ids) == 0 or len(medias) == limit: break
return total_count, medias
# Sort media by type and return result
result = {}
# Create keys for media types we are listing
if types:
for media_type in types:
result['%ss' % media_type] = []
else:
for media_type in fireEvent('media.types', merge = True):
result['%ss' % media_type] = []
total_count = len(medias)
if total_count == 0:
return 0, result
for kind in medias:
result['%ss' % kind['type']].append(kind)
return total_count, result
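The new return shape groups media by pluralized type (`'movie'` becomes the `'movies'` key). A sketch of just that grouping step, assuming plain dicts for media items:

```python
def group_by_type(medias, types=None):
    # Create a '<type>s' bucket per requested type (or per type present),
    # mirroring the "'%ss' % media_type" keys built in MediaPlugin.list()
    type_names = types if types else sorted({m['type'] for m in medias})
    result = {'%ss' % t: [] for t in type_names}
    for m in medias:
        result['%ss' % m['type']].append(m)
    return len(medias), result
```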
def listView(self, **kwargs):
total_movies, movies = self.list(
total_count, result = self.list(
types = splitString(kwargs.get('type')),
status = splitString(kwargs.get('status')),
release_status = splitString(kwargs.get('release_status')),
@@ -321,12 +347,12 @@ class MediaPlugin(MediaBase):
search = kwargs.get('search')
)
return {
'success': True,
'empty': len(movies) == 0,
'total': total_movies,
'movies': movies,
}
results = result
results['success'] = True
results['empty'] = len(result) == 0
results['total'] = total_count
return results
def addSingleListView(self):

couchpotato/core/media/_base/providers/base.py (81 changed lines)

@@ -4,6 +4,7 @@ import re
import time
import traceback
import xml.etree.ElementTree as XMLTree
import inspect
try:
from xml.etree.ElementTree import ParseError as XmlParseError
@@ -130,6 +131,46 @@ class YarrProvider(Provider):
addEvent('provider.belongs_to', self.belongsTo)
addEvent('provider.search.%s.%s' % (self.protocol, self.type), self.search)
# The frontend requires the supported media types to be known for every
# torrent and nzb provider in order for the user to select the appropriate
# provider for the content he or she wishes to consume.
def addSupportedMediaType(self, module):
section = None
for base in self.__class__.__bases__:
parts = inspect.getmodule(base).__name__.split('.')
try:
section = parts[ parts.index(module) + 1 ].lower()
break
except:
pass
settings = Env.get('settings')
head = 'Supported media types: '
values = ['', '']
groups = settings.options[section].get('groups', [])
groupno = 0
while groupno < len(groups):
if 'description' in groups[groupno]:
if isinstance(groups[groupno]['description'], list):
values = groups[groupno]['description']
else:
values[0] = groups[groupno]['description']
break
assert len(values) == 2
# Verify the second entry was created by us.
assert not values[1] or values[1].startswith(head)
types = [t.strip() for t in values[1].replace(head, '').split(',') if t]
if not self.type.title() in types:
types.append(self.type.title())
values[1] = head + ', '.join(types)
settings.options[section]['groups'][groupno]['description'] = values
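The section lookup in `addSupportedMediaType` derives the settings section name from a base class's module path: the component right after the given anchor (`'nzb'` or `'torrent'`) is the provider's module name. That path-parsing step in isolation, with a hypothetical helper name:

```python
def section_from_module(module_name, anchor):
    # e.g. 'couchpotato.core.media._base.providers.nzb.newznab' with
    # anchor 'nzb' yields 'newznab'; returns None if the anchor is absent
    parts = module_name.split('.')
    try:
        return parts[parts.index(anchor) + 1].lower()
    except (ValueError, IndexError):
        return None
```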
def getEnabledProtocol(self):
if self.isEnabled():
return self.protocol
@@ -266,8 +307,8 @@ class YarrProvider(Provider):
if quality.get('custom'):
want_3d = quality['custom'].get('3d')
for ids, qualities in self.cat_ids:
if identifier in qualities or (want_3d and '3d' in qualities):
for ids, value in self.cat_ids:
if self.categoryMatch(value, quality, identifier, want_3d):
return ids
if self.cat_backup_id:
@@ -275,6 +316,42 @@ class YarrProvider(Provider):
return []
def categoryMatch(self, value, quality, identifier, want_3d):
if type(value) is list:
# Basic identifier matching
if identifier in value:
return True
if want_3d and '3d' in value:
return True
return False
if type(value) is dict:
if not value:
# Wildcard category
return True
# Property matching
for key in ['codec', 'resolution', 'source']:
if key not in quality:
continue
for required in quality.get(key):
# Ensure category contains property list
if key not in value:
return False
# Ensure required property is in category
if required not in value[key]:
return False
# Valid
return True
# Unknown failure
return False
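The new `categoryMatch` handles three shapes of category value: a list of quality identifiers, an empty dict acting as a wildcard, and a dict of required properties. Restated as a standalone sketch (slightly restructured with `quality.get(key, [])`, but the same behavior as the diff above):

```python
def category_match(value, quality, identifier, want_3d=False):
    if isinstance(value, list):
        # Basic identifier matching, with an extra pass for 3D releases
        return identifier in value or bool(want_3d and '3d' in value)
    if isinstance(value, dict):
        if not value:
            return True  # empty dict acts as a wildcard category
        # Property matching: every codec/resolution/source value required
        # by the quality must appear in the category's property list
        for key in ('codec', 'resolution', 'source'):
            for required in quality.get(key, []):
                if key not in value or required not in value[key]:
                    return False
        return True
    return False  # unknown category shape
```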
class ResultList(list):

couchpotato/core/media/_base/providers/nzb/base.py (4 changed lines)

@@ -7,5 +7,9 @@ class NZBProvider(YarrProvider):
protocol = 'nzb'
def __init__(self):
super(NZBProvider, self).__init__()
self.addSupportedMediaType('nzb')
def calculateAge(self, unix):
return int(time.time() - unix) / 24 / 60 / 60
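`calculateAge` converts a release's Unix timestamp into an age in whole days. The original expression relies on Python 2's integer division; a Python-3-safe sketch of the same arithmetic:

```python
import time

def calculate_age(unix):
    # Seconds elapsed since the release timestamp, floored to whole days
    return int(time.time() - unix) // (24 * 60 * 60)
```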

couchpotato/core/media/_base/providers/torrent/base.py (4 changed lines)

@@ -17,6 +17,10 @@ class TorrentProvider(YarrProvider):
proxy_domain = None
proxy_list = []
def __init__(self):
super(TorrentProvider, self).__init__()
self.addSupportedMediaType('torrent')
def imdbMatch(self, url, imdbId):
if getImdb(url) == imdbId:
return True

couchpotato/core/media/_base/providers/torrent/extratorrent.py (143 changed lines)

@@ -0,0 +1,143 @@
import math
import re
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.variable import tryInt, tryFloat
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class Base(TorrentProvider):
field_link = 0
field_name = 2
field_size = 3
field_seeders = 4
field_leechers = 5
max_pages = 2
category = 0 # any category
urls = {
'url': '%s%s',
'detail': '%s%s',
'search': '%s/advanced_search/?page=%d&with=%s&s_cat=%d&seeds_from=1'
}
http_time_between_calls = 1 # Seconds
proxy_list = [
'http://extratorrent.cc'
]
def buildUrl(self, *args, **kwargs):
media = kwargs.get('media', None)
title = kwargs.get('title', None)
page = kwargs.get('page', 1)
if not title and media:
title = fireEvent('library.query', media, single = True)
if not title:
return False
assert isinstance(page, (int, long))
return self.urls['search'] % (self.getDomain(), page, tryUrlencode(title), self.category)
def _searchOnTitle(self, title, media, quality, results):
page = 1
pages = self.max_pages
while page <= pages:
url = self.buildUrl(title=title, media=media, page=page)
data = self.getHTMLData(url)
try:
html = BeautifulSoup(data)
if page == 1:
matches = re.search('total .b.([0-9]+)..b. torrents found', data, re.MULTILINE)
torrents_total = tryFloat(matches.group(1))
option = html.find('select', attrs={'name': 'torr_cat'}).find('option', attrs={'selected': 'selected'})
torrents_per_page = tryFloat(option.text)
pages = math.ceil(torrents_total / torrents_per_page)
if self.max_pages < pages:
pages = self.max_pages
for tr in html.find_all('tr', attrs={'class': ['tlr', 'tlz']}):
result = { }
field = self.field_link
for td in tr.find_all('td'):
if field == self.field_link:
a = td.find('a', title=re.compile('^download ', re.IGNORECASE))
result['url'] = self.urls['url'] % (self.getDomain(), a.get('href'))
elif field == self.field_name:
a = None
for a in td.find_all('a', title=re.compile('^view ', re.IGNORECASE)): pass
if a:
result['id'] = re.search('/torrent/(?P<id>\d+)/', a.get('href')).group('id')
result['name'] = a.text
result['detail_url'] = self.urls['detail'] % (self.getDomain(), a.get('href'))
elif field == self.field_size:
result['size'] = self.parseSize(td.text)
elif field == self.field_seeders:
result['seeders'] = tryInt(td.text)
elif field == self.field_leechers:
result['leechers'] = tryInt(td.text)
field += 1
# /for
if all(key in result for key in ('url', 'id', 'name', 'detail_url', 'size', 'seeders', 'leechers')):
results.append(result)
# /for
except:
log.error('Failed parsing results from ExtraTorrent: %s', traceback.format_exc())
break
page += 1
# /while
config = [{
'name': 'extratorrent',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'ExtraTorrent',
'description': '<a href="http://extratorrent.cc/">ExtraTorrent</a>',
'wizard': True,
'icon': 'AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAQAQAAAAAAAAAAAAAAAAAAAAAAADIvb7/xry9/8e8vf/Ivb7/yL2+/8i9vv/Ivb7/yL2+/8i9vv/Ivb7/yL2+/8i9vv/Ivb7/x7y9/8a8vf/Ivb7/xry9//7+/v/+/v7//v7+//7+/v/+/v7//v7+//7+/v/+/v7//v7+//7+/v/+/v7//v7+//7+/v/+/v7/xry9/8e8vf/8/Pz//Pz8/5OTkv9qSRj/akkY/2tKGf9rShn/a0oZ/2tKGP9qSRj/Tzoc/4iEff/8/Pz//Pz8/8e8vf/HvL3/+/v7//v7+/+Hf3P/4bd0/+G3dP/ht3T/4bd0/+G3dP/ht3T/4bd0/3hgOv9xbmr/+/v7//v7+//HvL3/x7y9//f4+P/3+Pj/hHxx/+zAfP/swHz/7MB8/3xzaP9yal3/cmpd/3JqXf98c2b/xcPB//f4+P/3+Pj/x7y9/8e8vf/29vb/9vb2/4V9cf/txYX/7cWF/2RSNv/Nz8//zs/R/87P0P/Mzs7/29va//f39//29vb/9vb2/8e8vf/HvL3/8/Pz//Pz8/+IgHP/78yU/+/MlP94YDr/8vLy//Ly8v/y8vL/8vLy//Ly8v/09PT/8/T0//Lz8//HvL3/x7y9//Ly8v/y8vL/h4F5/+/Pnf/vz53/kHdP/3hgOv94YDr/eWE7/3hgOv9USDf/5eLd//Ly8v/y8vL/x7y9/8e8vf/w8PD/8PDw/4eBef/t0Kf/7dCn/+3Qp//t0Kf/7dCn/+3Qp//t0Kf/VEg3/9vb2v/w7+//8PDw/8e8vf/HvL3/7u3u/+7t7v+Gg33/7dm8/+3ZvP/Txa7/cW5r/3Fua/9xbmv/d3Rv/62rqf/o5+f/7u3u/+7t7v/HvL3/x7y9/+vr6//r6+v/h4F4/+/l0P/v5dD/XFVM/8XHx//Fx8f/xcfH/8TGxv/r7Ov/6+vr/+vr6//r6+v/x7y9/8e8vf/p6un/6erp/4eAeP/58+b/+fPm/4iEff93dG7/fXl0/356df99eXT/bGlj/5OTkf/p6un/6erp/8e8vf/HvL3/6Ofn/+fn5/+GgHj/5uPd/+bj3f/m493/5uPd/+bj3f/m493/5uPd/3Z2df9vb2//6Ofn/+fn5//HvL3/x7y9/+fn5//n5+f/raqo/3Z2df94eHj/d3d2/3h4ef95eXn/eHh4/3h4eP94eHj/ramo/+fn5//n5+f/x7y9/8a8vf/l5eX/5eXl/+Xl5f/l5eX/5eXl/+Xl5f/l5eX/5eXl/+Xl5f/l5eX/5eXl/+Xl5f/l5eX/5eXl/8a8vf/Ivb7/xry9/8e8vf/Ivb7/yL2+/8i9vv/Ivb7/yL2+/8i9vv/Ivb7/yL2+/8i9vv/Ivb7/x7y9/8a8vf/Ivb7/AAD//wAA//8AAP//AAD//wAA//8AAP//AAD//wAA//8AAP//AAD//wAA//8AAP//AAD//wAA//8AAP//AAD//w==',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]
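The pagination arithmetic in `_searchOnTitle` above (total results divided by the per-page count scraped from the category `<select>`, capped at `max_pages`) can be checked in isolation, with a hypothetical helper name:

```python
import math

def pages_to_scan(torrents_total, torrents_per_page, max_pages=2):
    # Ceiling division gives the number of result pages available;
    # never scan more than max_pages of them
    pages = int(math.ceil(float(torrents_total) / torrents_per_page))
    return min(pages, max_pages)
```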

couchpotato/core/media/_base/providers/torrent/kickasstorrents.py (152 changed lines)

@@ -2,9 +2,11 @@ import re
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt, getIdentifier
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.torrent.base import TorrentMagnetProvider
from couchpotato.core.helpers.encoding import tryUrlencode
log = CPLog(__name__)
@@ -12,9 +14,20 @@ log = CPLog(__name__)
class Base(TorrentMagnetProvider):
COLUMN_NAME = 0
COLUMN_SIZE = 1
COLUMN_FILES = 2 # Unused
COLUMN_AGE = 3
COLUMN_SEEDS = 4
COLUMN_LEECHERS = 5
MAX_PAGES = 2
# The url for the first page containing search results is not postfixed
# with a page number, but providing it is allowed.
urls = {
'detail': '%s/%s',
'search': '%s/%s-i%s/',
'detail': '%s/%%s',
'search': '%s/usearch/%s/%d/',
}
cat_ids = [
@@ -24,6 +37,7 @@ class Base(TorrentMagnetProvider):
(['x264', '720p', '1080p', 'blu-ray', 'hdrip'], ['bd50', '1080p', '720p', 'brrip']),
(['dvdrip'], ['dvdrip']),
(['dvd'], ['dvdr']),
(['hdtv'], ['hdtv'])
]
http_time_between_calls = 1 # Seconds
@@ -35,64 +49,105 @@ class Base(TorrentMagnetProvider):
'https://katproxy.com',
]
def _search(self, media, quality, results):
data = self.getHTMLData(self.urls['search'] % (self.getDomain(), 'm', getIdentifier(media).replace('tt', '')))
if data:
def _searchOnTitle(self, title, media, quality, results):
# _searchOnTitle can be safely implemented here because the existence
# of a _search method on the provider is checked first, in which case
# the KickassTorrents movie provider searches for the movie using the
# IMDB identifier as a key.
cat_ids = self.getCatId(quality)
table_order = ['name', 'size', None, 'age', 'seeds', 'leechers']
base_detail_url = self.urls['detail'] % (self.getDomain())
page = 1
pages = 1
referer_url = None
while page <= pages and page <= self.MAX_PAGES:
# The use of buildUrl might be required in the future to scan
# multiple pages of show results.
url = self.buildUrl(title = title, media = media, page = page)
if url and referer_url and url == referer_url:
break
data = self.getHTMLData(url)
try:
html = BeautifulSoup(data)
resultdiv = html.find('div', attrs = {'class': 'tabs'})
for result in resultdiv.find_all('div', recursive = False):
if result.get('id').lower().strip('tab-') not in cat_ids:
continue
table = html.find('table', attrs = {'class': 'data'})
for tr in table.find_all('tr', attrs={'class': ['odd', 'even']}):
try:
for temp in result.find_all('tr'):
if temp['class'] is 'firstr' or not temp.get('id'):
continue
new = {}
nr = 0
for td in temp.find_all('td'):
column_name = table_order[nr]
if column_name:
if column_name == 'name':
link = td.find('div', {'class': 'torrentname'}).find_all('a')[2]
new['id'] = temp.get('id')[-7:]
new['name'] = link.text
new['url'] = td.find('a', {'href': re.compile('magnet:*')})['href']
new['detail_url'] = self.urls['detail'] % (self.getDomain(), link['href'][1:])
new['verified'] = True if td.find('a', 'iverify') else False
new['score'] = 100 if new['verified'] else 0
elif column_name is 'size':
new['size'] = self.parseSize(td.text)
elif column_name is 'age':
new['age'] = self.ageToDays(td.text)
elif column_name is 'seeds':
new['seeders'] = tryInt(td.text)
elif column_name is 'leechers':
new['leechers'] = tryInt(td.text)
nr += 1
# Only store verified torrents
if self.conf('only_verified') and not new['verified']:
continue
results.append(new)
result = { }
column = 0
for td in tr.find_all('td'):
if column == self.COLUMN_NAME:
link = td.find('a', 'cellMainLink')
for tag in link.findAll(True):
tag.unwrap()
result['id'] = tr['id'][-7:]
result['name'] = link.text
result['url'] = td.find('a', 'imagnet')['href']
result['detail_url'] = base_detail_url % (link['href'][1:])
if td.find('a', 'iverify'):
result['verified'] = True
result['score'] = 100
else:
result['verified'] = False
result['score'] = 0
elif column == self.COLUMN_SIZE:
result['size'] = self.parseSize(td.text)
elif column == self.COLUMN_AGE:
result['age'] = self.ageToDays(td.text)
elif column == self.COLUMN_SEEDS:
result['seeders'] = tryInt(td.text, 0)
elif column == self.COLUMN_LEECHERS:
result['leechers'] = tryInt(td.text, 0)
column += 1
if result:
# The name must contain at least one category identifier
score = 0
for cat_id in cat_ids:
if cat_id.lower() in result['name'].lower():
score += 1
break
if result['verified'] or not self.conf('only_verified'):
score += 1
if score == 2:
results.append(result)
buttons = html.find('div', 'pages')
if buttons:
pages = len(buttons.find_all(True, recursive = False))
except:
log.error('Failed parsing KickAssTorrents: %s', traceback.format_exc())
page += 1
referer_url = url
except AttributeError:
log.debug('No search results found.')
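The filtering above keeps a result only when its name contains one of the category identifiers and the verified-only setting is satisfied. A standalone sketch of that acceptance rule (the function name and flat arguments are illustrative, not the provider's API):

```python
def accept_result(name, verified, cat_ids, only_verified):
    # A result is kept only when its name matches at least one category
    # identifier AND it passes the verified-only setting (score == 2).
    score = 0
    for cat_id in cat_ids:
        if cat_id.lower() in name.lower():
            score += 1
            break
    if verified or not only_verified:
        score += 1
    return score == 2
```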
def buildUrl(self, *args, **kwargs):
# KickassTorrents also supports the "season:X episode:Y" parameters
# which would arguably make the search more robust, but we cannot use
# this mechanism because it might break searching for daily talk shows
# and the like, e.g. Jimmy Fallon.
media = kwargs.get('media', None)
title = kwargs.get('title', None)
page = kwargs.get('page', 1)
if not title and media:
title = fireEvent('library.query', media, single = True)
if not title:
return False
assert isinstance(page, (int, long))
return self.urls['search'] % (self.getDomain(), tryUrlencode(title), page)
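The new `search` URL template takes the domain, a urlencoded title, and a page number. A sketch of the same construction, substituting the standard library's `quote_plus` for CouchPotato's `tryUrlencode` helper (an assumption; the two may differ on edge cases):

```python
from urllib.parse import quote_plus

SEARCH_URL = '%s/usearch/%s/%d/'

def build_search_url(domain, title, page=1):
    # Mirrors the provider's buildUrl: domain, urlencoded title, page number.
    assert isinstance(page, int)
    return SEARCH_URL % (domain, quote_plus(title), page)
```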
def ageToDays(self, age_str):
age = 0
age_str = age_str.replace('&nbsp;', ' ')
@@ -119,7 +174,6 @@ class Base(TorrentMagnetProvider):
def correctProxy(self, data):
return 'search query' in data.lower()
config = [{
'name': 'kickasstorrents',
'groups': [

couchpotato/core/media/_base/quality/__init__.py (7 changes)

@@ -0,0 +1,7 @@
from .main import Quality
def autoload():
return Quality()
config = []

couchpotato/core/media/_base/quality/base.py (185 changes)

@@ -0,0 +1,185 @@
import traceback
from CodernityDB.database import RecordNotFound
from couchpotato import get_db
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import toUnicode, ss
from couchpotato.core.helpers.variable import mergeDicts, getExt, tryInt, splitString
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
class QualityBase(Plugin):
type = None
properties = {}
qualities = []
pre_releases = ['cam', 'ts', 'tc', 'r5', 'scr']
threed_tags = {
'sbs': [('half', 'sbs'), 'hsbs', ('full', 'sbs'), 'fsbs'],
'ou': [('half', 'ou'), 'hou', ('full', 'ou'), 'fou'],
'3d': ['2d3d', '3d2d', '3d'],
}
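The `threed_tags` map associates a 3D type with single-word tags and multi-word tag tuples. A hedged sketch of how a release name might be matched against it (the matching function here is illustrative, not the plugin's actual tag scanner):

```python
THREED_TAGS = {
    'sbs': [('half', 'sbs'), 'hsbs', ('full', 'sbs'), 'fsbs'],
    'ou': [('half', 'ou'), 'hou', ('full', 'ou'), 'fou'],
    '3d': ['2d3d', '3d2d', '3d'],
}

def detect_3d_type(name):
    # Tuples require all their parts to appear as separate words;
    # plain strings match as a single word.
    words = name.lower().replace('.', ' ').split()
    for threed_type, tags in THREED_TAGS.items():
        for tag in tags:
            if isinstance(tag, tuple):
                if all(part in words for part in tag):
                    return threed_type
            elif tag in words:
                return threed_type
    return None
```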
cached_qualities = None
cached_order = None
def __init__(self):
addEvent('quality.pre_releases', self.preReleases)
addEvent('quality.get', self.get)
addEvent('quality.all', self.all)
addEvent('quality.reset_cache', self.resetCache)
addEvent('quality.fill', self.fill)
addEvent('quality.isfinish', self.isFinish)
addEvent('quality.ishigher', self.isHigher)
addEvent('app.initialize', self.fill, priority = 10)
self.order = []
for q in self.qualities:
self.order.append(q.get('identifier'))
def preReleases(self, types = None):
if types and self.type not in types:
return
return self.pre_releases
def get(self, identifier, types = None):
if types and self.type not in types:
return
for q in self.qualities:
if identifier == q.get('identifier'):
return q
def all(self, types = None):
if types and self.type not in types:
return
if self.cached_qualities:
return self.cached_qualities
db = get_db()
temp = []
for quality in self.qualities:
quality_doc = db.get('quality', quality.get('identifier'), with_doc = True)['doc']
q = mergeDicts(quality, quality_doc)
temp.append(q)
if len(temp) == len(self.qualities):
self.cached_qualities = temp
return temp
def expand(self, quality):
for key, options in self.properties.items():
if key not in quality:
continue
quality[key] = [self.getProperty(key, identifier) for identifier in quality[key]]
return quality
def getProperty(self, key, identifier):
if key not in self.properties:
return
for item in self.properties[key]:
if item.get('identifier') == identifier:
return item
def resetCache(self):
self.cached_qualities = None
def fill(self):
try:
db = get_db()
order = 0
for q in self.qualities:
existing = None
try:
existing = db.get('quality', q.get('identifier'))
except RecordNotFound:
pass
if not existing:
db.insert({
'_t': 'quality',
'order': order,
'identifier': q.get('identifier'),
'size_min': tryInt(q.get('size')[0]),
'size_max': tryInt(q.get('size')[1]),
})
log.info('Creating profile: %s', q.get('label'))
db.insert({
'_t': 'profile',
'order': order + 20, # Make sure it goes behind other profiles
'core': True,
'qualities': [q.get('identifier')],
'label': toUnicode(q.get('label')),
'finish': [True],
'wait_for': [0],
})
order += 1
return True
except:
log.error('Failed: %s', traceback.format_exc())
return False
def isFinish(self, quality, profile, release_age = 0):
if not isinstance(profile, dict) or not profile.get('qualities'):
# No profile so anything (scanned) is good enough
return True
try:
index = [i for i, identifier in enumerate(profile['qualities']) if identifier == quality['identifier'] and bool(profile['3d'][i] if profile.get('3d') else False) == bool(quality.get('is_3d', False))][0]
if index == 0 or (profile['finish'][index] and int(release_age) >= int(profile.get('stop_after', [0])[0])):
return True
return False
except:
return False
def isHigher(self, quality, compare_with, profile = None):
if not isinstance(profile, dict) or not profile.get('qualities'):
profile = fireEvent('profile.default', single = True)
# Try to find quality in profile, if not found: a quality we do not want is lower than anything else
try:
quality_order = [i for i, identifier in enumerate(profile['qualities']) if identifier == quality['identifier'] and bool(profile['3d'][i] if profile.get('3d') else 0) == bool(quality.get('is_3d', 0))][0]
except:
log.debug('Quality %s not found in profile identifiers %s', (quality['identifier'] + (' 3D' if quality.get('is_3d', 0) else ''), \
[identifier + (' 3D' if (profile['3d'][i] if profile.get('3d') else 0) else '') for i, identifier in enumerate(profile['qualities'])]))
return 'lower'
# Try to find compare quality in profile, if not found: anything is higher than a not wanted quality
try:
compare_order = [i for i, identifier in enumerate(profile['qualities']) if identifier == compare_with['identifier'] and bool(profile['3d'][i] if profile.get('3d') else 0) == bool(compare_with.get('is_3d', 0))][0]
except:
log.debug('Compare quality %s not found in profile identifiers %s', (compare_with['identifier'] + (' 3D' if compare_with.get('is_3d', 0) else ''), \
[identifier + (' 3D' if (profile['3d'][i] if profile.get('3d') else 0) else '') for i, identifier in enumerate(profile['qualities'])]))
return 'higher'
# Note to self: a lower number means higher quality
if quality_order > compare_order:
return 'lower'
elif quality_order == compare_order:
return 'equal'
else:
return 'higher'
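Per the note above, a lower index in the profile's quality list means a higher quality, and a quality absent from the profile always loses. A simplified sketch of that comparison, ignoring the 3D flag handling:

```python
def is_higher(quality, compare_with, profile_qualities):
    # Profile qualities are ordered best-first, so a lower index
    # means a higher (better) quality.
    try:
        quality_order = profile_qualities.index(quality)
    except ValueError:
        return 'lower'   # an unwanted quality loses to anything wanted
    try:
        compare_order = profile_qualities.index(compare_with)
    except ValueError:
        return 'higher'  # anything wanted beats an unwanted quality
    if quality_order > compare_order:
        return 'lower'
    elif quality_order == compare_order:
        return 'equal'
    return 'higher'
```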

couchpotato/core/plugins/quality/index.py → couchpotato/core/media/_base/quality/index.py (renamed, 0 changes)

couchpotato/core/media/_base/quality/main.py (82 changes)

@@ -0,0 +1,82 @@
import traceback
from couchpotato import fireEvent, get_db, tryInt, CPLog
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.variable import splitString, mergeDicts
from couchpotato.core.media._base.quality.index import QualityIndex
from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
class Quality(Plugin):
_database = {
'quality': QualityIndex
}
def __init__(self):
addEvent('quality.single', self.single)
addApiView('quality.list', self.allView, docs = {
'desc': 'List all available qualities',
'params': {
'type': {'type': 'string', 'desc': 'Media type to filter on.'},
},
'return': {'type': 'object', 'example': """{
'success': True,
'list': array, qualities
}"""}
})
addApiView('quality.size.save', self.saveSize)
def single(self, identifier = '', types = None):
db = get_db()
quality = db.get('quality', identifier, with_doc = True)['doc']
if quality:
return mergeDicts(
fireEvent(
'quality.get',
quality['identifier'],
types = types,
single = True
),
quality
)
return {}
def allView(self, **kwargs):
return {
'success': True,
'list': fireEvent(
'quality.all',
types = splitString(kwargs.get('type')),
merge = True
)
}
def saveSize(self, **kwargs):
try:
db = get_db()
quality = db.get('quality', kwargs.get('identifier'), with_doc = True)
if quality:
quality['doc'][kwargs.get('value_type')] = tryInt(kwargs.get('value'))
db.update(quality['doc'])
fireEvent('quality.reset_cache')
return {
'success': True
}
except:
log.error('Failed: %s', traceback.format_exc())
return {
'success': False
}

couchpotato/core/plugins/quality/static/quality.js → couchpotato/core/media/_base/quality/static/quality.js (renamed, 0 changes)

couchpotato/core/media/_base/searcher/main.py (185 changes)

@@ -1,12 +1,15 @@
import datetime
import re
import time
from couchpotato import get_db
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import simplifyString
from couchpotato.core.helpers.variable import splitString, removeEmpty, removeDuplicate
from couchpotato.core.helpers.variable import splitString, removeEmpty, removeDuplicate, getTitle, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.searcher.base import SearcherBase
from couchpotato.environment import Env
log = CPLog(__name__)
@@ -16,12 +19,6 @@ class Searcher(SearcherBase):
# noinspection PyMissingConstructor
def __init__(self):
addEvent('searcher.protocols', self.getSearchProtocols)
addEvent('searcher.contains_other_quality', self.containsOtherQuality)
addEvent('searcher.correct_3d', self.correct3D)
addEvent('searcher.correct_year', self.correctYear)
addEvent('searcher.correct_name', self.correctName)
addEvent('searcher.correct_words', self.correctWords)
addEvent('searcher.search', self.search)
addApiView('searcher.full_search', self.searchAllView, docs = {
@@ -84,13 +81,14 @@ class Searcher(SearcherBase):
return search_protocols
def containsOtherQuality(self, nzb, movie_year = None, preferred_quality = None):
def containsOtherQuality(self, nzb, movie_year = None, preferred_quality = None, types = None):
if not preferred_quality: preferred_quality = {}
found = {}
# Try guessing via quality tags
guess = fireEvent('quality.guess', files = [nzb.get('name')], size = nzb.get('size', None), single = True)
guess = fireEvent('quality.guess', files = [nzb.get('name')], size = nzb.get('size', None), types = types, single = True)
if guess:
found[guess['identifier']] = True
@@ -111,7 +109,7 @@ class Searcher(SearcherBase):
found['dvdrip'] = True
# Allow other qualities
for allowed in preferred_quality.get('allow'):
for allowed in preferred_quality.get('allow', []):
if found.get(allowed):
del found[allowed]
@@ -120,14 +118,14 @@ class Searcher(SearcherBase):
return found
def correct3D(self, nzb, preferred_quality = None):
def correct3D(self, nzb, preferred_quality = None, types = None):
if not preferred_quality: preferred_quality = {}
if not preferred_quality.get('custom'): return
threed = preferred_quality['custom'].get('3d')
# Try guessing via quality tags
guess = fireEvent('quality.guess', [nzb.get('name')], single = True)
guess = fireEvent('quality.guess', [nzb.get('name')], types = types, single = True)
if guess:
return threed == guess.get('is_3d')
@@ -223,5 +221,168 @@ class Searcher(SearcherBase):
return True
def correctRelease(self, nzb = None, media = None, quality = None, **kwargs):
raise NotImplementedError
def couldBeReleased(self, is_pre_release, dates, media):
raise NotImplementedError
def getTitle(self, media):
return getTitle(media)
def getProfileId(self, media):
# Required because the profile_id for a show episode is stored with
# the show, not the episode.
raise NotImplementedError
def single(self, media, search_protocols = None, manual = False, force_download = False, notify = True):
# Find out search type
try:
if not search_protocols:
search_protocols = self.getSearchProtocols()
except SearchSetupError:
return
db = get_db()
profile = db.get('id', self.getProfileId(media))
if not profile or (media['status'] == 'done' and not manual):
log.debug('Media does not have a profile or already done, assuming in manage tab.')
fireEvent('media.restatus', media['_id'], single = True)
return
default_title = self.getTitle(media)
if not default_title:
log.error('No proper info found for media, removing it from library to stop it from causing more issues.')
fireEvent('media.delete', media['_id'], single = True)
return
# Update media status and check if it is still not done (due to the 'stop searching after' feature)
if fireEvent('media.restatus', media['_id'], single = True) == 'done':
log.debug('No better quality found, marking media %s as done.', default_title)
pre_releases = fireEvent('quality.pre_releases', single = True)
release_dates = fireEvent('media.update_release_dates', media['_id'], merge = True)
found_releases = []
previous_releases = media.get('releases', [])
too_early_to_search = []
outside_eta_results = 0
always_search = self.conf('always_search')
ignore_eta = manual
total_result_count = 0
if notify:
fireEvent('notify.frontend', type = '%s.searcher.started' % self._type, data = {'_id': media['_id']}, message = 'Searching for "%s"' % default_title)
# Ignore eta once every 7 days
if not always_search:
prop_name = 'last_ignored_eta.%s' % media['_id']
last_ignored_eta = float(Env.prop(prop_name, default = 0))
if last_ignored_eta < time.time() - 604800:
ignore_eta = True
Env.prop(prop_name, value = time.time())
ret = False
for index, q_identifier in enumerate(profile.get('qualities', [])):
quality_custom = {
'index': index,
'quality': q_identifier,
'finish': profile['finish'][index],
'wait_for': tryInt(profile['wait_for'][index]),
'3d': profile['3d'][index] if profile.get('3d') else False,
'minimum_score': profile.get('minimum_score', 1),
}
could_not_be_released = not self.couldBeReleased(q_identifier in pre_releases, release_dates, media)
if not always_search and could_not_be_released:
too_early_to_search.append(q_identifier)
# Skip release, if ETA isn't ignored
if not ignore_eta:
continue
has_better_quality = 0
# See if better quality is available
for release in media.get('releases', []):
if release['status'] not in ['available', 'ignored', 'failed']:
is_higher = fireEvent('quality.ishigher', \
{'identifier': q_identifier, 'is_3d': quality_custom.get('3d', 0)}, \
{'identifier': release['quality'], 'is_3d': release.get('is_3d', 0)}, \
profile, single = True)
if is_higher != 'higher':
has_better_quality += 1
# Don't search for quality lower than already available.
if has_better_quality > 0:
log.info('Better quality (%s) already available or snatched for %s', (q_identifier, default_title))
fireEvent('media.restatus', media['_id'], single = True)
break
quality = fireEvent('quality.single', identifier = q_identifier, single = True)
log.info('Search for %s in %s%s', (default_title, quality['label'], ' ignoring ETA' if always_search or ignore_eta else ''))
# Extend quality with profile customs
quality['custom'] = quality_custom
results = fireEvent('searcher.search', search_protocols, media, quality, single = True) or []
# Check if media isn't deleted while searching
if not fireEvent('media.get', media.get('_id'), single = True):
break
# Add them to this media releases list
found_releases += fireEvent('release.create_from_search', results, media, quality, single = True)
results_count = len(found_releases)
total_result_count += results_count
if results_count == 0:
log.debug('Nothing found for %s in %s', (default_title, quality['label']))
# Keep track of releases found outside ETA window
outside_eta_results += results_count if could_not_be_released else 0
# Don't trigger download, but notify user of available releases
if could_not_be_released and results_count > 0:
log.debug('Found %s releases for "%s", but ETA isn\'t correct yet.', (results_count, default_title))
# Try find a valid result and download it
if (force_download or not could_not_be_released or always_search) and fireEvent('release.try_download_result', results, media, quality_custom, single = True):
ret = True
# Remove releases that aren't found anymore
temp_previous_releases = []
for release in previous_releases:
if release.get('status') == 'available' and release.get('identifier') not in found_releases:
fireEvent('release.delete', release.get('_id'), single = True)
else:
temp_previous_releases.append(release)
previous_releases = temp_previous_releases
del temp_previous_releases
# Break if CP wants to shut down
if self.shuttingDown() or ret:
break
if total_result_count > 0:
fireEvent('media.tag', media['_id'], 'recent', update_edited = True, single = True)
if len(too_early_to_search) > 0:
log.info2('Too early to search for %s, %s', (too_early_to_search, default_title))
if outside_eta_results > 0:
message = 'Found %s releases for "%s" before ETA. Select and download via the dashboard.' % (outside_eta_results, default_title)
log.info(message)
if not manual:
fireEvent('media.available', message = message, data = {})
if notify:
fireEvent('notify.frontend', type = '%s.searcher.ended' % self._type, data = {'_id': media['_id']})
return ret
class SearchSetupError(Exception):
pass
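`single` overrides the ETA check at most once per week per media item (604800 seconds). The timing rule above can be sketched in isolation (names here are illustrative, not the searcher's API):

```python
import time

WEEK_SECONDS = 7 * 24 * 60 * 60  # 604800, the window used by the searcher

def should_ignore_eta(last_ignored_eta, now=None):
    # ETA is ignored again only once the previous override
    # is more than a week old.
    now = time.time() if now is None else now
    return last_ignored_eta < now - WEEK_SECONDS
```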

couchpotato/core/media/movie/_base/static/list.js (10 changes)

@@ -2,6 +2,9 @@ var MovieList = new Class({
Implements: [Events, Options],
media_type: 'movie',
list_key: 'movies',
options: {
api_call: 'media.list',
navigation: true,
@@ -598,7 +601,7 @@ var MovieList = new Class({
Api.request(self.options.api_call, {
'data': Object.merge({
'type': self.options.type || 'movie',
'type': self.media_type || 'movie',
'status': self.options.status,
'limit_offset': self.options.limit ? self.options.limit + ',' + self.offset : null
}, self.filter),
@@ -619,8 +622,9 @@ var MovieList = new Class({
self.el.setStyle('min-height', null);
}
self.store(json.movies);
self.addMovies(json.movies, json.total || json.movies.length);
var items = json[self.list_key] || [];
self.store(items);
self.addMovies(items, json.total || items.length);
if(self.scrollspy) {
self.load_more.set('text', 'load more movies');
self.scrollspy.start();

couchpotato/core/media/movie/_base/static/search.js (4 changes)

@@ -2,6 +2,8 @@ var BlockSearchMovieItem = new Class({
Implements: [Options, Events],
media_type: 'movie',
initialize: function(info, options){
var self = this;
self.setOptions(options);
@@ -113,7 +115,7 @@ var BlockSearchMovieItem = new Class({
self.loadingMask();
Api.request('movie.add', {
Api.request(self.media_type + '.add', {
'data': {
'identifier': self.info.imdb,
'title': self.title_select.get('value'),

couchpotato/core/media/movie/providers/torrent/extratorrent.py (12 changes)

@@ -0,0 +1,12 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.extratorrent import Base
from couchpotato.core.media.movie.providers.base import MovieProvider
log = CPLog(__name__)
autoload = 'ExtraTorrent'
class ExtraTorrent(MovieProvider, Base):
category = 4

couchpotato/core/media/movie/providers/torrent/kickasstorrents.py (73 changes)

@@ -1,3 +1,7 @@
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt, getIdentifier
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.kickasstorrents import Base
from couchpotato.core.media.movie.providers.base import MovieProvider
@@ -8,4 +12,71 @@ autoload = 'KickAssTorrents'
class KickAssTorrents(MovieProvider, Base):
pass
urls = {
'detail': '%s/%s',
'search': '%s/%s-i%s/',
}
cat_ids = [
(['cam'], ['cam']),
(['telesync'], ['ts', 'tc']),
(['screener', 'tvrip'], ['screener']),
(['x264', '720p', '1080p', 'blu-ray', 'hdrip'], ['bd50', '1080p', '720p', 'brrip']),
(['dvdrip'], ['dvdrip']),
(['dvd'], ['dvdr']),
]
def _search(self, media, quality, results):
data = self.getHTMLData(self.urls['search'] % (self.getDomain(), 'm', getIdentifier(media).replace('tt', '')))
if data:
cat_ids = self.getCatId(quality)
try:
html = BeautifulSoup(data)
resultdiv = html.find('div', attrs = {'class': 'tabs'})
for result in resultdiv.find_all('div', recursive = False):
if result.get('id').lower().strip('tab-') not in cat_ids:
continue
try:
for temp in result.find_all('tr'):
if temp['class'] is 'firstr' or not temp.get('id'):
continue
new = {}
column = 0
for td in temp.find_all('td'):
if column == self.COLUMN_NAME:
link = td.find('div', {'class': 'torrentname'}).find_all('a')[2]
new['id'] = temp.get('id')[-7:]
new['name'] = link.text
new['url'] = td.find('a', 'imagnet')['href']
new['detail_url'] = self.urls['detail'] % (self.getDomain(), link['href'][1:])
new['verified'] = True if td.find('a', 'iverify') else False
new['score'] = 100 if new['verified'] else 0
elif column == self.COLUMN_SIZE:
new['size'] = self.parseSize(td.text)
elif column == self.COLUMN_AGE:
new['age'] = self.ageToDays(td.text)
elif column == self.COLUMN_SEEDS:
new['seeders'] = tryInt(td.text)
elif column == self.COLUMN_LEECHERS:
new['leechers'] = tryInt(td.text)
column += 1
# Only store verified torrents
if self.conf('only_verified') and not new['verified']:
continue
results.append(new)
except:
log.error('Failed parsing KickAssTorrents: %s', traceback.format_exc())
except AttributeError:
log.debug('No search results found.')

couchpotato/core/media/movie/quality/__init__.py (0 changes)

couchpotato/core/plugins/quality/main.py → couchpotato/core/media/movie/quality/main.py (254 changes)

@@ -1,199 +1,51 @@
from math import fabs, ceil
import traceback
import re
from CodernityDB.database import RecordNotFound
from couchpotato import get_db
from couchpotato.api import addApiView
from couchpotato import CPLog
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import toUnicode, ss
from couchpotato.core.helpers.variable import mergeDicts, getExt, tryInt, splitString, tryFloat
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.plugins.quality.index import QualityIndex
from couchpotato.core.helpers.encoding import ss
from couchpotato.core.helpers.variable import getExt, splitString, tryFloat
from couchpotato.core.media._base.quality.base import QualityBase
from math import ceil, fabs
log = CPLog(__name__)
autoload = 'MovieQuality'
class QualityPlugin(Plugin):
_database = {
'quality': QualityIndex
}
class MovieQuality(QualityBase):
type = 'movie'
qualities = [
{'identifier': 'bd50', 'hd': True, 'allow_3d': True, 'size': (20000, 60000), 'median_size': 40000, 'label': 'BR-Disk', 'alternative': ['bd25', ('br', 'disk')], 'allow': ['1080p'], 'ext':['iso', 'img'], 'tags': ['bdmv', 'certificate', ('complete', 'bluray'), 'avc', 'mvc']},
{'identifier': '1080p', 'hd': True, 'allow_3d': True, 'size': (4000, 20000), 'median_size': 10000, 'label': '1080p', 'width': 1920, 'height': 1080, 'alternative': [], 'allow': [], 'ext':['mkv', 'm2ts', 'ts'], 'tags': ['m2ts', 'x264', 'h264', '1080']},
{'identifier': '720p', 'hd': True, 'allow_3d': True, 'size': (3000, 10000), 'median_size': 5500, 'label': '720p', 'width': 1280, 'height': 720, 'alternative': [], 'allow': [], 'ext':['mkv', 'ts'], 'tags': ['x264', 'h264', '720']},
{'identifier': 'brrip', 'hd': True, 'allow_3d': True, 'size': (700, 7000), 'median_size': 2000, 'label': 'BR-Rip', 'alternative': ['bdrip', ('br', 'rip'), 'hdtv', 'hdrip'], 'allow': ['720p', '1080p'], 'ext':['mp4', 'avi'], 'tags': ['webdl', ('web', 'dl')]},
{'identifier': 'dvdr', 'size': (3000, 10000), 'median_size': 4500, 'label': 'DVD-R', 'alternative': ['br2dvd', ('dvd', 'r')], 'allow': [], 'ext':['iso', 'img', 'vob'], 'tags': ['pal', 'ntsc', 'video_ts', 'audio_ts', ('dvd', 'r'), 'dvd9']},
{'identifier': 'dvdrip', 'size': (600, 2400), 'median_size': 1500, 'label': 'DVD-Rip', 'width': 720, 'alternative': [('dvd', 'rip')], 'allow': [], 'ext':['avi'], 'tags': [('dvd', 'rip'), ('dvd', 'xvid'), ('dvd', 'divx')]},
{'identifier': 'scr', 'size': (600, 1600), 'median_size': 700, 'label': 'Screener', 'alternative': ['screener', 'dvdscr', 'ppvrip', 'dvdscreener', 'hdscr', 'webrip', ('web', 'rip')], 'allow': ['dvdr', 'dvdrip', '720p', '1080p'], 'ext':[], 'tags': []},
{'identifier': 'r5', 'size': (600, 1000), 'median_size': 700, 'label': 'R5', 'alternative': ['r6'], 'allow': ['dvdr', '720p', '1080p'], 'ext':[]},
{'identifier': 'tc', 'size': (600, 1000), 'median_size': 700, 'label': 'TeleCine', 'alternative': ['telecine'], 'allow': ['720p', '1080p'], 'ext':[]},
{'identifier': 'ts', 'size': (600, 1000), 'median_size': 700, 'label': 'TeleSync', 'alternative': ['telesync', 'hdts'], 'allow': ['720p', '1080p'], 'ext':[]},
{'identifier': 'cam', 'size': (600, 1000), 'median_size': 700, 'label': 'Cam', 'alternative': ['camrip', 'hdcam'], 'allow': ['720p', '1080p'], 'ext':[]}
{'identifier': 'bd50', 'hd': True, 'allow_3d': True, 'size': (20000, 60000), 'label': 'BR-Disk', 'alternative': ['bd25', ('br', 'disk')], 'allow': ['1080p'], 'ext':['iso', 'img'], 'tags': ['bdmv', 'certificate', ('complete', 'bluray'), 'avc', 'mvc']},
{'identifier': '1080p', 'hd': True, 'allow_3d': True, 'size': (4000, 20000), 'label': '1080p', 'width': 1920, 'height': 1080, 'alternative': [], 'allow': [], 'ext':['mkv', 'm2ts', 'ts'], 'tags': ['m2ts', 'x264', 'h264']},
{'identifier': '720p', 'hd': True, 'allow_3d': True, 'size': (3000, 10000), 'label': '720p', 'width': 1280, 'height': 720, 'alternative': [], 'allow': [], 'ext':['mkv', 'ts'], 'tags': ['x264', 'h264']},
{'identifier': 'brrip', 'hd': True, 'allow_3d': True, 'size': (700, 7000), 'label': 'BR-Rip', 'alternative': ['bdrip', ('br', 'rip')], 'allow': ['720p', '1080p'], 'ext':['mp4', 'avi'], 'tags': ['hdtv', 'hdrip', 'webdl', ('web', 'dl')]},
{'identifier': 'dvdr', 'size': (3000, 10000), 'label': 'DVD-R', 'alternative': ['br2dvd', ('dvd', 'r')], 'allow': [], 'ext':['iso', 'img', 'vob'], 'tags': ['pal', 'ntsc', 'video_ts', 'audio_ts', ('dvd', 'r'), 'dvd9']},
{'identifier': 'dvdrip', 'size': (600, 2400), 'label': 'DVD-Rip', 'width': 720, 'alternative': [('dvd', 'rip')], 'allow': [], 'ext':['avi'], 'tags': [('dvd', 'rip'), ('dvd', 'xvid'), ('dvd', 'divx')]},
{'identifier': 'scr', 'size': (600, 1600), 'label': 'Screener', 'alternative': ['screener', 'dvdscr', 'ppvrip', 'dvdscreener', 'hdscr'], 'allow': ['dvdr', 'dvdrip', '720p', '1080p'], 'ext':[], 'tags': ['webrip', ('web', 'rip')]},
{'identifier': 'r5', 'size': (600, 1000), 'label': 'R5', 'alternative': ['r6'], 'allow': ['dvdr', '720p'], 'ext':[]},
{'identifier': 'tc', 'size': (600, 1000), 'label': 'TeleCine', 'alternative': ['telecine'], 'allow': ['720p'], 'ext':[]},
{'identifier': 'ts', 'size': (600, 1000), 'label': 'TeleSync', 'alternative': ['telesync', 'hdts'], 'allow': ['720p'], 'ext':[]},
{'identifier': 'cam', 'size': (600, 1000), 'label': 'Cam', 'alternative': ['camrip', 'hdcam'], 'allow': ['720p'], 'ext':[]},
]
pre_releases = ['cam', 'ts', 'tc', 'r5', 'scr']
threed_tags = {
'sbs': [('half', 'sbs'), 'hsbs', ('full', 'sbs'), 'fsbs'],
'ou': [('half', 'ou'), 'hou', ('full', 'ou'), 'fou'],
'3d': ['2d3d', '3d2d', '3d'],
}
cached_qualities = None
cached_order = None
def __init__(self):
addEvent('quality.all', self.all)
addEvent('quality.single', self.single)
super(MovieQuality, self).__init__()
addEvent('quality.guess', self.guess)
addEvent('quality.pre_releases', self.preReleases)
addEvent('quality.order', self.getOrder)
addEvent('quality.ishigher', self.isHigher)
addEvent('quality.isfinish', self.isFinish)
addEvent('quality.fill', self.fill)
addApiView('quality.size.save', self.saveSize)
addApiView('quality.list', self.allView, docs = {
'desc': 'List all available qualities',
'return': {'type': 'object', 'example': """{
'success': True,
'list': array, qualities
}"""}
})
addEvent('app.initialize', self.fill, priority = 10)
addEvent('app.test', self.doTest)
self.order = []
self.addOrder()
def addOrder(self):
self.order = []
for q in self.qualities:
self.order.append(q.get('identifier'))
def getOrder(self):
return self.order
def preReleases(self):
return self.pre_releases
def allView(self, **kwargs):
return {
'success': True,
'list': self.all()
}
def all(self):
if self.cached_qualities:
return self.cached_qualities
db = get_db()
temp = []
for quality in self.qualities:
quality_doc = db.get('quality', quality.get('identifier'), with_doc = True)['doc']
q = mergeDicts(quality, quality_doc)
temp.append(q)
if len(temp) == len(self.qualities):
self.cached_qualities = temp
return temp
def single(self, identifier = ''):
db = get_db()
quality_dict = {}
quality = db.get('quality', identifier, with_doc = True)['doc']
if quality:
quality_dict = mergeDicts(self.getQuality(quality['identifier']), quality)
return quality_dict
def getQuality(self, identifier):
for q in self.qualities:
if identifier == q.get('identifier'):
return q
def saveSize(self, **kwargs):
try:
db = get_db()
quality = db.get('quality', kwargs.get('identifier'), with_doc = True)
if quality:
quality['doc'][kwargs.get('value_type')] = tryInt(kwargs.get('value'))
db.update(quality['doc'])
self.cached_qualities = None
return {
'success': True
}
except:
log.error('Failed: %s', traceback.format_exc())
return {
'success': False
}
def fill(self):
try:
db = get_db()
order = 0
for q in self.qualities:
existing = None
try:
existing = db.get('quality', q.get('identifier'))
except RecordNotFound:
pass
if not existing:
db.insert({
'_t': 'quality',
'order': order,
'identifier': q.get('identifier'),
'size_min': tryInt(q.get('size')[0]),
'size_max': tryInt(q.get('size')[1]),
})
log.info('Creating profile: %s', q.get('label'))
db.insert({
'_t': 'profile',
'order': order + 20, # Make sure it goes behind other profiles
'core': True,
'qualities': [q.get('identifier')],
'label': toUnicode(q.get('label')),
'finish': [True],
'wait_for': [0],
})
order += 1
return True
except:
log.error('Failed: %s', traceback.format_exc())
return False
def guess(self, files, extra = None, size = None, use_cache = True):
def guess(self, files, extra = None, size = None, types = None):
if types and self.type not in types:
return
if not extra: extra = {}
# Create hash for cache
cache_key = str([f.replace('.' + getExt(f), '') if len(getExt(f)) < 4 else f for f in files])
if use_cache:
#if use_cache:
if True:
cached = self.getCache(cache_key)
if cached and len(extra) == 0:
return cached
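The cache key above strips a short file extension from each filename so renamed containers of the same release hash identically. A standalone sketch of that key function, with `get_ext` as a stand-in for CouchPotato's `getExt` helper:

```python
import os

def get_ext(filename):
    # Stand-in for CouchPotato's getExt: the text after the last dot
    return os.path.splitext(filename)[1][1:]

def guess_cache_key(files):
    # Drop extensions shorter than 4 characters, so 'Movie.1080p.mkv'
    # and 'Movie.1080p.avi' produce the same cache key
    return str([f.replace('.' + get_ext(f), '') if len(get_ext(f)) < 4 else f
                for f in files])
```

Names with long extensions (4+ characters) are kept untouched, matching the list comprehension above.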
@@ -363,11 +215,14 @@ class QualityPlugin(Plugin):
size_diff = size - size_min
size_proc = (size_diff / proc_range)
median_diff = quality['median_size'] - size_min
median_proc = (median_diff / proc_range)
#median_diff = quality['median_size'] - size_min
# FIXME: not sure this is the proper fix
average_diff = ((size_min + size_max) / 2) - size_min
average_proc = (average_diff / proc_range)
max_points = 8
score += ceil(max_points - (fabs(size_proc - median_proc) * max_points))
#score += ceil(max_points - (fabs(size_proc - median_proc) * max_points))
score += ceil(max_points - (fabs(size_proc - average_proc) * max_points))
else:
score -= 5
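The replaced size term above can be checked in isolation. A sketch of the proximity score using the range midpoint (the "average" fix) instead of the removed median; sizes are assumed to be in MB and `max_points` mirrors the value above:

```python
from math import ceil, fabs

def size_score(size, size_min, size_max, max_points = 8):
    # Score a release by how close its size sits to the midpoint of the
    # quality's expected [size_min, size_max] range
    proc_range = float(size_max - size_min)
    size_proc = (size - size_min) / proc_range
    average_proc = (((size_min + size_max) / 2.0) - size_min) / proc_range
    return int(ceil(max_points - (fabs(size_proc - average_proc) * max_points)))
```

Since the midpoint is always at 0.5 of the range, a release at either boundary loses half the points.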
@@ -399,49 +254,6 @@ class QualityPlugin(Plugin):
if quality.get('identifier') != q.get('identifier') and score.get(q.get('identifier')):
score[q.get('identifier')]['score'] -= 1
def isFinish(self, quality, profile, release_age = 0):
if not isinstance(profile, dict) or not profile.get('qualities'):
# No profile so anything (scanned) is good enough
return True
try:
index = [i for i, identifier in enumerate(profile['qualities']) if identifier == quality['identifier'] and bool(profile['3d'][i] if profile.get('3d') else False) == bool(quality.get('is_3d', False))][0]
if index == 0 or (profile['finish'][index] and int(release_age) >= int(profile.get('stop_after', [0])[0])):
return True
return False
except:
return False
def isHigher(self, quality, compare_with, profile = None):
if not isinstance(profile, dict) or not profile.get('qualities'):
profile = fireEvent('profile.default', single = True)
# Try to find quality in profile, if not found: a quality we do not want is lower than anything else
try:
quality_order = [i for i, identifier in enumerate(profile['qualities']) if identifier == quality['identifier'] and bool(profile['3d'][i] if profile.get('3d') else 0) == bool(quality.get('is_3d', 0))][0]
except:
log.debug('Quality %s not found in profile identifiers %s', (quality['identifier'] + (' 3D' if quality.get('is_3d', 0) else ''), \
[identifier + (' 3D' if (profile['3d'][i] if profile.get('3d') else 0) else '') for i, identifier in enumerate(profile['qualities'])]))
return 'lower'
# Try to find compare quality in profile, if not found: anything is higher than a not wanted quality
try:
compare_order = [i for i, identifier in enumerate(profile['qualities']) if identifier == compare_with['identifier'] and bool(profile['3d'][i] if profile.get('3d') else 0) == bool(compare_with.get('is_3d', 0))][0]
except:
log.debug('Compare quality %s not found in profile identifiers %s', (compare_with['identifier'] + (' 3D' if compare_with.get('is_3d', 0) else ''), \
[identifier + (' 3D' if (profile['3d'][i] if profile.get('3d') else 0) else '') for i, identifier in enumerate(profile['qualities'])]))
return 'higher'
# Note to self: a lower number means higher quality
if quality_order > compare_order:
return 'lower'
elif quality_order == compare_order:
return 'equal'
else:
return 'higher'
def doTest(self):
tests = {
@@ -512,5 +324,3 @@ class QualityPlugin(Plugin):
return True
else:
log.error('Quality test failed: %s out of %s succeeded', (correct, len(tests)))

couchpotato/core/media/movie/searcher.py (179 changed lines)

@@ -4,13 +4,13 @@ import re
import time
import traceback
from couchpotato import get_db
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import simplifyString
from couchpotato.core.helpers.variable import getTitle, possibleTitles, getImdb, getIdentifier, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.searcher.base import SearcherBase
from couchpotato.core.media._base.searcher.main import Searcher
from couchpotato.core.media._base.searcher.main import SearchSetupError
from couchpotato.core.media.movie import MovieTypeBase
from couchpotato.environment import Env
@@ -20,7 +20,7 @@ log = CPLog(__name__)
autoload = 'MovieSearcher'
class MovieSearcher(SearcherBase, MovieTypeBase):
class MovieSearcher(Searcher, MovieTypeBase):
in_progress = False
@@ -110,153 +110,6 @@ class MovieSearcher(SearcherBase, MovieTypeBase):
self.in_progress = False
def single(self, movie, search_protocols = None, manual = False, force_download = False):
# Find out search type
try:
if not search_protocols:
search_protocols = fireEvent('searcher.protocols', single = True)
except SearchSetupError:
return
if not movie['profile_id'] or (movie['status'] == 'done' and not manual):
log.debug('Movie doesn\'t have a profile or already done, assuming in manage tab.')
fireEvent('media.restatus', movie['_id'], single = True)
return
default_title = getTitle(movie)
if not default_title:
log.error('No proper info found for movie, removing it from library to stop it from causing more issues.')
fireEvent('media.delete', movie['_id'], single = True)
return
# Update media status and check if it is still not done (due to the "stop searching after" feature)
if fireEvent('media.restatus', movie['_id'], single = True) == 'done':
log.debug('No better quality found, marking movie %s as done.', default_title)
pre_releases = fireEvent('quality.pre_releases', single = True)
release_dates = fireEvent('movie.update_release_dates', movie['_id'], merge = True)
found_releases = []
previous_releases = movie.get('releases', [])
too_early_to_search = []
outside_eta_results = 0
always_search = self.conf('always_search')
ignore_eta = manual
total_result_count = 0
fireEvent('notify.frontend', type = 'movie.searcher.started', data = {'_id': movie['_id']}, message = 'Searching for "%s"' % default_title)
# Ignore eta once every 7 days
if not always_search:
prop_name = 'last_ignored_eta.%s' % movie['_id']
last_ignored_eta = float(Env.prop(prop_name, default = 0))
if last_ignored_eta < time.time() - 604800:
ignore_eta = True
Env.prop(prop_name, value = time.time())
db = get_db()
profile = db.get('id', movie['profile_id'])
ret = False
for index, q_identifier in enumerate(profile.get('qualities', [])):
quality_custom = {
'index': index,
'quality': q_identifier,
'finish': profile['finish'][index],
'wait_for': tryInt(profile['wait_for'][index]),
'3d': profile['3d'][index] if profile.get('3d') else False,
'minimum_score': profile.get('minimum_score', 1),
}
could_not_be_released = not self.couldBeReleased(q_identifier in pre_releases, release_dates, movie['info']['year'])
if not always_search and could_not_be_released:
too_early_to_search.append(q_identifier)
# Skip release, if ETA isn't ignored
if not ignore_eta:
continue
has_better_quality = 0
# See if better quality is available
for release in movie.get('releases', []):
if release['status'] not in ['available', 'ignored', 'failed']:
is_higher = fireEvent('quality.ishigher', \
{'identifier': q_identifier, 'is_3d': quality_custom.get('3d', 0)}, \
{'identifier': release['quality'], 'is_3d': release.get('is_3d', 0)}, \
profile, single = True)
if is_higher != 'higher':
has_better_quality += 1
# Don't search for quality lower then already available.
if has_better_quality > 0:
log.info('Better quality (%s) already available or snatched for %s', (q_identifier, default_title))
fireEvent('media.restatus', movie['_id'], single = True)
break
quality = fireEvent('quality.single', identifier = q_identifier, single = True)
log.info('Search for %s in %s%s', (default_title, quality['label'], ' ignoring ETA' if always_search or ignore_eta else ''))
# Extend quality with profile customs
quality['custom'] = quality_custom
results = fireEvent('searcher.search', search_protocols, movie, quality, single = True) or []
# Check if movie isn't deleted while searching
if not fireEvent('media.get', movie.get('_id'), single = True):
break
# Add them to this movie releases list
found_releases += fireEvent('release.create_from_search', results, movie, quality, single = True)
results_count = len(found_releases)
total_result_count += results_count
if results_count == 0:
log.debug('Nothing found for %s in %s', (default_title, quality['label']))
# Keep track of releases found outside ETA window
outside_eta_results += results_count if could_not_be_released else 0
# Don't trigger download, but notify user of available releases
if could_not_be_released and results_count > 0:
log.debug('Found %s releases for "%s", but ETA isn\'t correct yet.', (results_count, default_title))
# Try find a valid result and download it
if (force_download or not could_not_be_released or always_search) and fireEvent('release.try_download_result', results, movie, quality_custom, single = True):
ret = True
# Remove releases that aren't found anymore
temp_previous_releases = []
for release in previous_releases:
if release.get('status') == 'available' and release.get('identifier') not in found_releases:
fireEvent('release.delete', release.get('_id'), single = True)
else:
temp_previous_releases.append(release)
previous_releases = temp_previous_releases
del temp_previous_releases
# Break if CP wants to shut down
if self.shuttingDown() or ret:
break
if total_result_count > 0:
fireEvent('media.tag', movie['_id'], 'recent', update_edited = True, single = True)
if len(too_early_to_search) > 0:
log.info2('Too early to search for %s, %s', (too_early_to_search, default_title))
if outside_eta_results > 0:
message = 'Found %s releases for "%s" before ETA. Select and download via the dashboard.' % (outside_eta_results, default_title)
log.info(message)
if not manual:
fireEvent('media.available', message = message, data = {})
fireEvent('notify.frontend', type = 'movie.searcher.ended', data = {'_id': movie['_id']})
return ret
def correctRelease(self, nzb = None, media = None, quality = None, **kwargs):
if media.get('type') != 'movie': return
@@ -271,19 +124,23 @@ class MovieSearcher(SearcherBase, MovieTypeBase):
return False
# Check for required and ignored words
if not fireEvent('searcher.correct_words', nzb['name'], media, single = True):
if not self.correctWords(nzb['name'], media):
return False
preferred_quality = quality if quality else fireEvent('quality.single', identifier = quality['identifier'], single = True)
# Contains lower quality string
contains_other = fireEvent('searcher.contains_other_quality', nzb, movie_year = media['info']['year'], preferred_quality = preferred_quality, single = True)
if contains_other and isinstance(contains_other, dict):
contains_other = self.containsOtherQuality(
nzb, movie_year = media['info']['year'],
preferred_quality = preferred_quality,
types = [self._type])
if contains_other != False:
log.info2('Wrong: %s, looking for %s, found %s', (nzb['name'], quality['label'], [x for x in contains_other] if contains_other else 'no quality'))
return False
# Contains lower quality string
if not fireEvent('searcher.correct_3d', nzb, preferred_quality = preferred_quality, single = True):
# FIXME: media was passed instead of nzb here before
if not self.correct3D(nzb, preferred_quality = preferred_quality, types = [self._type]):
log.info2('Wrong: %s, %slooking for %s in 3D', (nzb['name'], ('' if preferred_quality['custom'].get('3d') else 'NOT '), quality['label']))
return False
@@ -318,23 +175,24 @@ class MovieSearcher(SearcherBase, MovieTypeBase):
for movie_title in possibleTitles(raw_title):
movie_words = re.split('\W+', simplifyString(movie_title))
if fireEvent('searcher.correct_name', nzb['name'], movie_title, single = True):
if self.correctName(nzb['name'], movie_title):
# if no IMDB link, at least check year range 1
if len(movie_words) > 2 and fireEvent('searcher.correct_year', nzb['name'], media['info']['year'], 1, single = True):
if len(movie_words) > 2 and self.correctYear(nzb['name'], media['info']['year'], 1):
return True
# if no IMDB link, at least check year
if len(movie_words) <= 2 and fireEvent('searcher.correct_year', nzb['name'], media['info']['year'], 0, single = True):
if len(movie_words) <= 2 and self.correctYear(nzb['name'], media['info']['year'], 0):
return True
log.info("Wrong: %s, undetermined naming. Looking for '%s (%s)'", (nzb['name'], media_title, media['info']['year']))
return False
def couldBeReleased(self, is_pre_release, dates, year = None):
def couldBeReleased(self, is_pre_release, dates, media):
now = int(time.time())
now_year = date.today().year
now_month = date.today().month
year = media['info']['year']
if (year is None or year < now_year - 1 or (year <= now_year - 1 and now_month > 4)) and (not dates or (dates.get('theater', 0) == 0 and dates.get('dvd', 0) == 0)):
return True
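The date guard above can be condensed into a standalone check. This sketch covers only the no-known-dates branch; the real method also weighs theater and DVD timestamps, elided here:

```python
from datetime import date

def could_be_released(year, today = None):
    # No theater/DVD dates known: assume a home release exists once the
    # title is two calendar years old, or one year old and past April
    today = today or date.today()
    if year is None:
        return True
    return year < today.year - 1 or (year <= today.year - 1 and today.month > 4)
```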
@@ -405,9 +263,10 @@ class MovieSearcher(SearcherBase, MovieTypeBase):
if media['type'] == 'movie':
return getTitle(media)
class SearchSetupError(Exception):
pass
def getProfileId(self, media):
assert media['type'] == 'movie'
return media.get('profile_id')
config = [{
'name': 'moviesearcher',

couchpotato/core/media/show/__init__.py (55 changed lines)

@@ -0,0 +1,55 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.media import MediaBase
autoload = 'ShowToggler'
class ShowToggler(MediaBase):
"""
TV Show support is EXPERIMENTAL and disabled by default. The "Shows" item
must only be visible if the user enabled it. This class notifies the
frontend if the shows.enabled configuration item changed.
FIXME: remove after TV Show support is considered stable.
"""
def __init__(self):
addEvent('setting.save.shows.enabled.after', self.toggleTab)
def toggleTab(self):
fireEvent('notify.frontend', type = 'shows.enabled', data = self.conf('enabled', section='shows'))
class ShowTypeBase(MediaBase):
_type = 'show'
def getType(self):
if hasattr(self, 'type') and self.type != self._type:
return '%s.%s' % (self._type, self.type)
return self._type
config = [{
'name': 'shows',
'groups': [
{
'tab': 'general',
'name': 'Shows',
'label': 'Shows',
'description': 'Enable EXPERIMENTAL TV Show support',
'options': [
{
'name': 'enabled',
'default': False,
'type': 'enabler',
},
{
'name': 'prefer_episode_releases',
'default': False,
'type': 'bool',
'label': 'Episode releases',
'description': 'Prefer episode releases over season packs',
},
],
},
],
}]
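The round trip behind the toggle above (setting saved → `setting.save.shows.enabled.after` → `notify.frontend` → the `shows.enabled` listener in page.js) can be sketched with a tiny pub/sub stand-in. The `Bus` class here is illustrative, not CouchPotato's actual event API:

```python
class Bus(object):
    # Minimal stand-in for CouchPotato's addEvent/fireEvent bus
    def __init__(self):
        self.handlers = {}

    def add(self, name, handler):
        self.handlers.setdefault(name, []).append(handler)

    def fire(self, name, **kwargs):
        for handler in self.handlers.get(name, []):
            handler(**kwargs)

bus = Bus()
frontend = {'show_support': False}

# page.js side: toggleShows adds/removes the body class on notification
bus.add('shows.enabled',
        lambda data: frontend.update(show_support = (data is True)))

# ShowToggler side: re-broadcast the saved setting to the frontend
def toggle_tab(enabled):
    bus.fire('shows.enabled', data = enabled)
```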

couchpotato/core/media/show/_base/__init__.py (4 changed lines)

@@ -0,0 +1,4 @@
from .main import ShowBase
def autoload():
return ShowBase()

couchpotato/core/media/show/_base/episode.py (111 changed lines)

@@ -0,0 +1,111 @@
from couchpotato import get_db
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.logger import CPLog
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.media import MediaBase
log = CPLog(__name__)
autoload = 'Episode'
class Episode(MediaBase):
_type = 'show.episode'
def __init__(self):
addEvent('show.episode.add', self.add)
addEvent('show.episode.update', self.update)
addEvent('show.episode.update_extras', self.updateExtras)
def add(self, parent_id, info = None, update_after = True, status = None):
if not info: info = {}
identifiers = info.pop('identifiers', None)
if not identifiers:
log.warning('Unable to add episode, missing identifiers (info provider mismatch?)')
return
# Add Episode
episode_info = {
'_t': 'media',
'type': 'show.episode',
'identifiers': identifiers,
'status': status if status else 'active',
'parent_id': parent_id,
'info': info, # Returned dict by providers
}
# Check if episode already exists
existing_episode = fireEvent('media.with_identifiers', identifiers, with_doc = True, types = [self._type], single = True)
db = get_db()
if existing_episode:
s = existing_episode['doc']
s.update(episode_info)
episode = db.update(s)
else:
episode = db.insert(episode_info)
# Update library info
if update_after is not False:
handle = fireEventAsync if update_after == 'async' else fireEvent
handle('show.episode.update_extras', episode, info, store = True, single = True)
return episode
def update(self, media_id = None, identifiers = None, info = None):
if not info: info = {}
if self.shuttingDown():
return
db = get_db()
episode = db.get('id', media_id)
# Get new info
if not info:
season = db.get('id', episode['parent_id'])
show = db.get('id', season['parent_id'])
info = fireEvent(
'episode.info', show.get('identifiers'), {
'season_identifiers': season.get('identifiers'),
'season_number': season.get('info', {}).get('number'),
'episode_identifiers': episode.get('identifiers'),
'episode_number': episode.get('info', {}).get('number'),
'absolute_number': episode.get('info', {}).get('absolute_number')
},
merge = True
)
info['season_number'] = season.get('info', {}).get('number')
identifiers = info.pop('identifiers', None) or identifiers
# Update/create media
episode['identifiers'].update(identifiers)
episode.update({'info': info})
self.updateExtras(episode, info)
db.update(episode)
return episode
def updateExtras(self, episode, info, store=False):
db = get_db()
# Get images
image_urls = info.get('images', [])
existing_files = episode.get('files', {})
self.getPoster(image_urls, existing_files)
if store:
db.update(episode)
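`Episode.add` above (and `Season.add` later in this diff) share an upsert shape: look up existing media by identifiers, merge and update when found, insert otherwise. A sketch of that pattern against a plain dict store (hypothetical, not the CodernityDB API used here):

```python
def upsert_media(store, media_type, identifiers, info):
    # Key the document on its provider identifiers, mirroring the
    # 'media.with_identifiers' lookup used above
    key = tuple(sorted(identifiers.items()))
    doc = {'_t': 'media', 'type': media_type,
           'identifiers': identifiers, 'info': info}
    if key in store:
        store[key].update(doc)  # existing media: merge the new fields in
    else:
        store[key] = doc        # first sighting: insert a fresh document
    return store[key]
```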

couchpotato/core/media/show/_base/main.py (289 changed lines)

@@ -0,0 +1,289 @@
import time
import traceback
from couchpotato import get_db
from couchpotato.api import addApiView
from couchpotato.core.event import fireEvent, fireEventAsync, addEvent
from couchpotato.core.helpers.variable import getTitle, find
from couchpotato.core.logger import CPLog
from couchpotato.core.media import MediaBase
log = CPLog(__name__)
class ShowBase(MediaBase):
_type = 'show'
def __init__(self):
super(ShowBase, self).__init__()
self.initType()
addApiView('show.add', self.addView, docs = {
'desc': 'Add new show to the wanted list',
'params': {
'identifier': {'desc': 'IMDB id of the show you want to add.'},
'profile_id': {'desc': 'ID of the quality profile you want to add the show with. If empty, the default profile is used.'},
'category_id': {'desc': 'ID of the category you want to add the show in.'},
'title': {'desc': 'Title of the show to use for search and renaming'},
}
})
addEvent('show.add', self.add)
addEvent('show.update', self.update)
addEvent('show.update_extras', self.updateExtras)
def addView(self, **kwargs):
add_dict = self.add(params = kwargs)
return {
'success': True if add_dict else False,
'show': add_dict,
}
def add(self, params = None, force_readd = True, search_after = True, update_after = True, notify_after = True, status = None):
if not params: params = {}
# Identifiers
if not params.get('identifiers'):
msg = 'Can\'t add show without at least 1 identifier.'
log.error(msg)
fireEvent('notify.frontend', type = 'show.no_identifier', message = msg)
return False
info = params.get('info')
if not info or (info and len(info.get('titles', [])) == 0):
info = fireEvent('show.info', merge = True, identifiers = params.get('identifiers'))
# Add Show
try:
m, added = self.create(info, params, force_readd, search_after, update_after)
result = fireEvent('media.get', m['_id'], single = True)
if added and notify_after:
if params.get('title'):
message = 'Successfully added "%s" to your wanted list.' % params.get('title', '')
else:
title = getTitle(m)
if title:
message = 'Successfully added "%s" to your wanted list.' % title
else:
message = 'Successfully added to your wanted list.'
fireEvent('notify.frontend', type = 'show.added', data = result, message = message)
return result
except:
log.error('Failed adding media: %s', traceback.format_exc())
def create(self, info, params = None, force_readd = True, search_after = True, update_after = True, notify_after = True, status = None):
# Set default title
def_title = self.getDefaultTitle(info)
# Default profile and category
default_profile = {}
if not params.get('profile_id'):
default_profile = fireEvent('profile.default', single = True)
cat_id = params.get('category_id')
media = {
'_t': 'media',
'type': 'show',
'title': def_title,
'identifiers': info.get('identifiers'),
'status': status if status else 'active',
'profile_id': params.get('profile_id', default_profile.get('_id')),
'category_id': cat_id if cat_id is not None and len(cat_id) > 0 and cat_id != '-1' else None
}
identifiers = info.pop('identifiers', {})
seasons = info.pop('seasons', {})
# Update media with info
self.updateInfo(media, info)
existing_show = fireEvent('media.with_identifiers', params.get('identifiers'), with_doc = True, types = [self._type], single = True)
db = get_db()
if existing_show:
s = existing_show['doc']
s.update(media)
show = db.update(s)
else:
show = db.insert(media)
# Update dict to be usable
show.update(media)
added = True
do_search = False
search_after = search_after and self.conf('search_on_add', section = 'showsearcher')
onComplete = None
if existing_show:
if search_after:
onComplete = self.createOnComplete(show['_id'])
search_after = False
elif force_readd:
# Clean snatched history
for release in fireEvent('release.for_media', show['_id'], single = True):
if release.get('status') in ['downloaded', 'snatched', 'done']:
if params.get('ignore_previous', False):
release['status'] = 'ignored'
db.update(release)
else:
fireEvent('release.delete', release['_id'], single = True)
show['profile_id'] = params.get('profile_id', default_profile.get('_id'))
show['category_id'] = media.get('category_id')
show['last_edit'] = int(time.time())
do_search = True
db.update(show)
else:
params.pop('info', None)
log.debug('Show already exists, not updating: %s', params)
added = False
# Create episodes
self.createEpisodes(show, seasons)
# Trigger update info
if added and update_after:
# Do full update to get images etc
fireEventAsync('show.update_extras', show.copy(), info, store = True, on_complete = onComplete)
# Remove releases
for rel in fireEvent('release.for_media', show['_id'], single = True):
if rel['status'] == 'available':
db.delete(rel)
if do_search and search_after:
onComplete = self.createOnComplete(show['_id'])
onComplete()
return show, added
def createEpisodes(self, m, seasons_info):
# Add Seasons
for season_nr in seasons_info:
season_info = seasons_info[season_nr]
episodes = season_info.get('episodes', {})
season = fireEvent('show.season.add', m.get('_id'), season_info, update_after = False, single = True)
# Add Episodes
for episode_nr in episodes:
episode_info = episodes[episode_nr]
episode_info['season_number'] = season_nr
fireEvent('show.episode.add', season.get('_id'), episode_info, update_after = False, single = True)
def update(self, media_id = None, media = None, identifiers = None, info = None):
"""
Update movie information inside media['doc']['info']
@param media_id: document id
@param identifiers: identifiers from multiple providers
{
'thetvdb': 123,
'imdb': 'tt123123',
..
}
@param extended: update with extended info (parses more info, actors, images from some info providers)
@return: dict, with media
"""
if not info: info = {}
if not identifiers: identifiers = {}
db = get_db()
if self.shuttingDown():
return
if media is None and media_id:
media = db.get('id', media_id)
elif media is None:
log.error('Missing "media" and "media_id" parameters, unable to update')
return
if not info:
info = fireEvent('show.info', identifiers = media.get('identifiers'), merge = True)
try:
identifiers = info.pop('identifiers', {})
seasons = info.pop('seasons', {})
self.updateInfo(media, info)
self.updateEpisodes(media, seasons)
self.updateExtras(media, info)
db.update(media)
return media
except:
log.error('Failed update media: %s', traceback.format_exc())
return {}
def updateInfo(self, media, info):
# Remove season info for later use (save separately)
info.pop('in_wanted', None)
info.pop('in_library', None)
if not info or len(info) == 0:
log.error('Could not update, no show info to work with: %s', media.get('identifier'))
return False
# Update basic info
media['info'] = info
def updateEpisodes(self, media, seasons):
# Fetch current season/episode tree
show_tree = fireEvent('library.tree', media_id = media['_id'], single = True)
# Update seasons
for season_num in seasons:
season_info = seasons[season_num]
episodes = season_info.get('episodes', {})
# Find season that matches number
season = find(lambda s: s.get('info', {}).get('number', 0) == season_num, show_tree.get('seasons', []))
if not season:
log.warning('Unable to find season "%s"', season_num)
continue
# Update season
fireEvent('show.season.update', season['_id'], info = season_info, single = True)
# Update episodes
for episode_num in episodes:
episode_info = episodes[episode_num]
episode_info['season_number'] = season_num
# Find episode that matches number
episode = find(lambda s: s.get('info', {}).get('number', 0) == episode_num, season.get('episodes', []))
if not episode:
log.debug('Creating new episode %s in season %s', (episode_num, season_num))
fireEvent('show.episode.add', season.get('_id'), episode_info, update_after = False, single = True)
continue
fireEvent('show.episode.update', episode['_id'], info = episode_info, single = True)
def updateExtras(self, media, info, store=False):
db = get_db()
# Update image file
image_urls = info.get('images', [])
self.getPoster(media, image_urls)
if store:
db.update(media)

couchpotato/core/media/show/_base/season.py (96 changed lines)

@@ -0,0 +1,96 @@
from couchpotato import get_db
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.logger import CPLog
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.media import MediaBase
log = CPLog(__name__)
autoload = 'Season'
class Season(MediaBase):
_type = 'show.season'
def __init__(self):
addEvent('show.season.add', self.add)
addEvent('show.season.update', self.update)
addEvent('show.season.update_extras', self.updateExtras)
def add(self, parent_id, info = None, update_after = True, status = None):
if not info: info = {}
identifiers = info.pop('identifiers', None)
info.pop('episodes', None)
# Add Season
season_info = {
'_t': 'media',
'type': 'show.season',
'identifiers': identifiers,
'status': status if status else 'active',
'parent_id': parent_id,
'info': info, # Returned dict by providers
}
# Check if season already exists
existing_season = fireEvent('media.with_identifiers', identifiers, with_doc = True, types = [self._type], single = True)
db = get_db()
if existing_season:
s = existing_season['doc']
s.update(season_info)
season = db.update(s)
else:
season = db.insert(season_info)
# Update library info
if update_after is not False:
handle = fireEventAsync if update_after == 'async' else fireEvent
handle('show.season.update_extras', season, info, store = True, single = True)
return season
def update(self, media_id = None, identifiers = None, info = None):
if not info: info = {}
if self.shuttingDown():
return
db = get_db()
season = db.get('id', media_id)
show = db.get('id', season['parent_id'])
# Get new info
if not info:
info = fireEvent('season.info', show.get('identifiers'), {
'season_number': season.get('info', {}).get('number', 0)
}, merge = True)
identifiers = info.pop('identifiers', None) or identifiers
info.pop('episodes', None)
# Update/create media
season['identifiers'].update(identifiers)
season.update({'info': info})
self.updateExtras(season, info)
db.update(season)
return season
def updateExtras(self, season, info, store=False):
db = get_db()
# Get images
image_urls = info.get('images', [])
existing_files = season.get('files', {})
self.getPoster(image_urls, existing_files)
if store:
db.update(season)

couchpotato/core/media/show/_base/static/episode.actions.js (0 changed lines)

couchpotato/core/media/show/_base/static/episode.js (128 changed lines)

@@ -0,0 +1,128 @@
var Episode = new Class({
Extends: BlockBase,
action: {},
initialize: function(show, options, data){
var self = this;
self.setOptions(options);
self.show = show;
self.options = options;
self.data = data;
self.profile = self.show.profile;
self.el = new Element('div.item.episode').adopt(
self.detail = new Element('div.item.data')
);
self.create();
},
create: function(){
var self = this;
self.detail.set('id', 'episode_'+self.data._id);
self.detail.adopt(
new Element('span.episode', {'text': (self.data.info.number || 0)}),
new Element('span.name', {'text': self.getTitle()}),
new Element('span.firstaired', {'text': self.data.info.firstaired}),
self.quality = new Element('span.quality', {
'events': {
'click': function(e){
var releases = self.detail.getElement('.item-actions .releases');
if(releases.isVisible())
releases.fireEvent('click', [e])
}
}
}),
self.actions = new Element('div.item-actions')
);
// Add profile
if(self.profile.data) {
self.profile.getTypes().each(function(type){
var q = self.addQuality(type.get('quality'), type.get('3d'));
if((type.finish == true || type.get('finish')) && !q.hasClass('finish')){
q.addClass('finish');
q.set('title', q.get('title') + ' Will finish searching for this episode if this quality is found.')
}
});
}
// Add releases
self.updateReleases();
Object.each(self.options.actions, function(action, key){
self.action[key.toLowerCase()] = action = new self.options.actions[key](self);
if(action.el)
self.actions.adopt(action)
});
},
updateReleases: function(){
var self = this;
if(!self.data.releases || self.data.releases.length == 0) return;
self.data.releases.each(function(release){
var q = self.quality.getElement('.q_'+ release.quality+(release.is_3d ? '.is_3d' : ':not(.is_3d)')),
status = release.status;
if(!q && (status == 'snatched' || status == 'seeding' || status == 'done'))
q = self.addQuality(release.quality, release.is_3d || false);
if (q && !q.hasClass(status)){
q.addClass(status);
q.set('title', (q.get('title') ? q.get('title') : '') + ' status: '+ status)
}
});
},
addQuality: function(quality, is_3d){
var self = this,
q = Quality.getQuality(quality);
return new Element('span', {
'text': q.label + (is_3d ? ' 3D' : ''),
'class': 'q_'+q.identifier + (is_3d ? ' is_3d' : ''),
'title': ''
}).inject(self.quality);
},
getTitle: function(){
var self = this;
var title = '';
if(self.data.info.titles && self.data.info.titles.length > 0) {
title = self.data.info.titles[0];
} else {
title = 'Episode ' + self.data.info.number;
}
return title;
},
getIdentifier: function(){
var self = this;
try {
return self.get('identifiers').imdb;
}
catch (e){ }
return self.get('imdb');
},
get: function(attr){
return this.data[attr] || this.data.info[attr]
}
});

couchpotato/core/media/show/_base/static/list.js (8 changed lines)

@@ -0,0 +1,8 @@
var ShowList = new Class({
Extends: MovieList,
media_type: 'show',
list_key: 'shows'
});

couchpotato/core/media/show/_base/static/page.js (56 changed lines)

@@ -0,0 +1,56 @@
Page.Shows = new Class({
Extends: PageBase,
name: 'shows',
icon: 'show',
sub_pages: ['Wanted'],
default_page: 'Wanted',
current_page: null,
initialize: function(parent, options){
var self = this;
self.parent(parent, options);
self.navigation = new BlockNavigation();
$(self.navigation).inject(self.content, 'top');
App.on('shows.enabled', self.toggleShows.bind(self));
},
defaultAction: function(action, params){
var self = this;
if(self.current_page){
self.current_page.hide();
if(self.current_page.list && self.current_page.list.navigation)
self.current_page.list.navigation.dispose();
}
var route = new Route();
route.parse(action);
var page_name = route.getPage() != 'index' ? route.getPage().capitalize() : self.default_page;
var page = self.sub_pages.filter(function(page){
return page.name == page_name;
}).pick()['class'];
page.open(route.getAction() || 'index', params);
page.show();
if(page.list && page.list.navigation)
page.list.navigation.inject(self.navigation);
self.current_page = page;
self.navigation.activate(page_name.toLowerCase());
},
toggleShows: function(notification) {
document.body[notification.data === true ? 'addClass' : 'removeClass']('show_support');
}
});

couchpotato/core/media/show/_base/static/search.js (7 changed lines)

@@ -0,0 +1,7 @@
var BlockSearchShowItem = new Class({
Extends: BlockSearchMovieItem,
media_type: 'show'
});

127
couchpotato/core/media/show/_base/static/season.js

@ -0,0 +1,127 @@
var Season = new Class({
Extends: BlockBase,
action: {},
initialize: function(show, options, data){
var self = this;
self.setOptions(options);
self.show = show;
self.options = options;
self.data = data;
self.profile = self.show.profile;
self.el = new Element('div.item.season').adopt(
self.detail = new Element('div.item.data')
);
self.create();
},
create: function(){
var self = this;
self.detail.set('id', 'season_'+self.data._id);
self.detail.adopt(
new Element('span.name', {'text': self.getTitle()}),
self.quality = new Element('span.quality', {
'events': {
'click': function(e){
var releases = self.detail.getElement('.item-actions .releases');
if(releases.isVisible())
releases.fireEvent('click', [e])
}
}
}),
self.actions = new Element('div.item-actions')
);
// Add profile
if(self.profile.data) {
self.profile.getTypes().each(function(type){
var q = self.addQuality(type.get('quality'), type.get('3d'));
if((type.finish == true || type.get('finish')) && !q.hasClass('finish')){
q.addClass('finish');
q.set('title', q.get('title') + ' Will finish searching for this season if this quality is found.')
}
});
}
// Add releases
self.updateReleases();
Object.each(self.options.actions, function(action, key){
self.action[key.toLowerCase()] = action = new self.options.actions[key](self);
if(action.el)
self.actions.adopt(action)
});
},
updateReleases: function(){
var self = this;
if(!self.data.releases || self.data.releases.length == 0) return;
self.data.releases.each(function(release){
var q = self.quality.getElement('.q_'+ release.quality+(release.is_3d ? '.is_3d' : ':not(.is_3d)')),
status = release.status;
if(!q && (status == 'snatched' || status == 'seeding' || status == 'done'))
q = self.addQuality(release.quality, release.is_3d || false);
if (q && !q.hasClass(status)){
q.addClass(status);
q.set('title', (q.get('title') ? q.get('title') : '') + ' status: '+ status)
}
});
},
addQuality: function(quality, is_3d){
var self = this,
q = Quality.getQuality(quality);
return new Element('span', {
'text': q.label + (is_3d ? ' 3D' : ''),
'class': 'q_'+q.identifier + (is_3d ? ' is_3d' : ''),
'title': ''
}).inject(self.quality);
},
getTitle: function(){
var self = this;
var title = '';
if(self.data.info.number) {
title = 'Season ' + self.data.info.number;
} else {
// Season 0 / Specials
title = 'Specials';
}
return title;
},
getIdentifier: function(){
var self = this;
try {
return self.get('identifiers').imdb;
}
catch (e){ }
return self.get('imdb');
},
get: function(attr){
return this.data[attr] || this.data.info[attr]
}
});

92
couchpotato/core/media/show/_base/static/show.episodes.js

@ -0,0 +1,92 @@
var Episodes = new Class({
initialize: function(show, options) {
var self = this;
self.show = show;
self.options = options;
},
open: function(){
var self = this;
if(!self.container){
self.container = new Element('div.options').grab(
self.episodes_container = new Element('div.episodes.table')
);
self.container.inject(self.show, 'top');
Api.request('library.tree', {
'data': {
'media_id': self.show.data._id
},
'onComplete': function(json){
self.data = json.result;
self.createEpisodes();
}
});
}
self.show.slide('in', self.container, true);
},
createEpisodes: function() {
var self = this;
self.data.seasons.sort(self.sortSeasons);
self.data.seasons.each(function(season) {
self.createSeason(season);
season.episodes.sort(self.sortEpisodes);
season.episodes.each(function(episode) {
self.createEpisode(episode);
});
});
},
createSeason: function(season) {
var self = this,
s = new Season(self.show, self.options, season);
$(s).inject(self.episodes_container);
},
createEpisode: function(episode){
var self = this,
e = new Episode(self.show, self.options, episode);
$(e).inject(self.episodes_container);
},
sortSeasons: function(a, b) {
// Move "Specials" to the bottom of the list
if(!a.info.number) {
return 1;
}
if(!b.info.number) {
return -1;
}
// Order seasons ascending
if(a.info.number < b.info.number)
return -1;
if(a.info.number > b.info.number)
return 1;
return 0;
},
sortEpisodes: function(a, b) {
// Order episodes ascending
if(a.info.number < b.info.number)
return -1;
if(a.info.number > b.info.number)
return 1;
return 0;
}
});
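The two comparators above sort numbered seasons and episodes in ascending order, and force entries without a number (the "Specials" season) to the bottom. The same ordering can be sketched in Python with a sort key (names here are illustrative, not part of the codebase):

```python
def season_sort_key(season):
    # Numbered seasons ascend; seasons without a number ("Specials") sort last,
    # mirroring the `!a.info.number` checks in sortSeasons above.
    number = season['info'].get('number')
    return (number is None or number == 0, number or 0)

seasons = [
    {'info': {'number': 2}},
    {'info': {'number': None}},  # Specials
    {'info': {'number': 1}},
]
ordered = sorted(seasons, key=season_sort_key)
# Specials ends up after the numbered seasons
```

A key function avoids the pairwise-comparator branches entirely: the boolean first element pushes "Specials" last, the second element orders the rest.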

5
couchpotato/core/media/show/_base/static/show.js

@ -0,0 +1,5 @@
var Show = new Class({
Extends: Movie
});

1225
couchpotato/core/media/show/_base/static/show.scss

File diff suppressed because it is too large

28
couchpotato/core/media/show/_base/static/wanted.js

@ -0,0 +1,28 @@
var ShowsWanted = new Class({
Extends: PageBase,
name: 'wanted',
title: 'List of TV shows you are subscribed to',
folder_browser: null,
has_tab: false,
indexAction: function(){
var self = this;
if(!self.wanted){
// Wanted shows
self.wanted = new ShowList({
'identifier': 'wanted',
'status': 'active',
'type': 'show',
'actions': [MA.IMDB, MA.Release, MA.Refresh, MA.Delete],
'add_new': true,
'on_empty_element': App.createUserscriptButtons().addClass('empty_wanted')
});
$(self.wanted).inject(self.content);
}
}
});

0
couchpotato/core/media/show/library/__init__.py

71
couchpotato/core/media/show/library/episode.py

@ -0,0 +1,71 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.library.base import LibraryBase
log = CPLog(__name__)
autoload = 'EpisodeLibraryPlugin'
class EpisodeLibraryPlugin(LibraryBase):
def __init__(self):
addEvent('library.query', self.query)
addEvent('library.identifier', self.identifier)
def query(self, media, first = True, condense = True, include_identifier = True, **kwargs):
if media.get('type') != 'show.episode':
return
related = fireEvent('library.related', media, single = True)
# Get season titles
titles = fireEvent(
'library.query', related['season'],
first = False,
include_identifier = include_identifier,
condense = condense,
single = True
)
# Add episode identifier to titles
if include_identifier:
identifier = fireEvent('library.identifier', media, single = True)
if identifier and identifier.get('episode'):
titles = [title + ('E%02d' % identifier['episode']) for title in titles]
if first:
return titles[0] if titles else None
return titles
def identifier(self, media):
if media.get('type') != 'show.episode':
return
identifier = {
'season': None,
'episode': None
}
# TODO identifier mapping
# scene_map = media['info'].get('map_episode', {}).get('scene')
# if scene_map:
# # Use scene mappings if they are available
# identifier['season'] = scene_map.get('season_nr')
# identifier['episode'] = scene_map.get('episode_nr')
# else:
# Fallback to normal season/episode numbers
identifier['season'] = media['info'].get('season_number')
identifier['episode'] = media['info'].get('number')
# Cast identifiers to integers
# TODO this will need changing to support identifiers with trailing 'a', 'b' characters
identifier['season'] = tryInt(identifier['season'], None)
identifier['episode'] = tryInt(identifier['episode'], None)
return identifier
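Taken together, the query flow above builds search titles by asking the season level for titles (which append ` S%02d`) and then appending `E%02d` for the episode. A standalone sketch of the resulting format (the helper name is illustrative, not from the codebase):

```python
def episode_query_titles(show_titles, season_number, episode_number):
    # Mirrors the chain in query(): show titles -> ' S%02d' (season level)
    # -> 'E%02d' (episode level), yielding e.g. 'Example Show S01E05'.
    season_titles = ['%s S%02d' % (title, season_number) for title in show_titles]
    return ['%sE%02d' % (title, episode_number) for title in season_titles]

print(episode_query_titles(['Example Show'], 1, 5))  # -> ['Example Show S01E05']
```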

52
couchpotato/core/media/show/library/season.py

@ -0,0 +1,52 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.library.base import LibraryBase
log = CPLog(__name__)
autoload = 'SeasonLibraryPlugin'
class SeasonLibraryPlugin(LibraryBase):
def __init__(self):
addEvent('library.query', self.query)
addEvent('library.identifier', self.identifier)
def query(self, media, first = True, condense = True, include_identifier = True, **kwargs):
if media.get('type') != 'show.season':
return
related = fireEvent('library.related', media, single = True)
# Get show titles
titles = fireEvent(
'library.query', related['show'],
first = False,
condense = condense,
single = True
)
# TODO map_names
# Add season identifier to titles
if include_identifier:
identifier = fireEvent('library.identifier', media, single = True)
if identifier and identifier.get('season') is not None:
titles = [title + (' S%02d' % identifier['season']) for title in titles]
if first:
return titles[0] if titles else None
return titles
def identifier(self, media):
if media.get('type') != 'show.season':
return
return {
'season': tryInt(media['info']['number'], None)
}

38
couchpotato/core/media/show/library/show.py

@ -0,0 +1,38 @@
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.encoding import simplifyString
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.library.base import LibraryBase
from qcond import QueryCondenser
log = CPLog(__name__)
autoload = 'ShowLibraryPlugin'
class ShowLibraryPlugin(LibraryBase):
query_condenser = QueryCondenser()
def __init__(self):
addEvent('library.query', self.query)
def query(self, media, first = True, condense = True, include_identifier = True, **kwargs):
if media.get('type') != 'show':
return
titles = media['info']['titles']
if condense:
# Use QueryCondenser to build a list of optimal search titles
condensed_titles = self.query_condenser.distinct(titles)
if condensed_titles:
# Use condensed titles if we got a valid result
titles = condensed_titles
else:
# Fallback to simplifying titles
titles = [simplifyString(title) for title in titles]
if first:
return titles[0] if titles else None
return titles
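When QueryCondenser returns no distinct titles, the code above falls back to simplifying each title. A minimal sketch of that fallback path, with a stand-in simplifier (the real `simplifyString` helper lives in `couchpotato.core.helpers.encoding`; this version is only illustrative):

```python
import re

def simplify(title):
    # Stand-in for simplifyString: lowercase, strip punctuation,
    # collapse whitespace (an approximation, not the real helper).
    return re.sub(r'\s+', ' ', re.sub(r'[^a-z0-9 ]', '', title.lower())).strip()

def fallback_titles(titles):
    # The branch taken when query_condenser.distinct() yields nothing
    return [simplify(t) for t in titles]

print(fallback_titles(['Mr. Robot', 'Mr Robot (US)']))
```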

7
couchpotato/core/media/show/matcher/__init__.py

@ -0,0 +1,7 @@
from .main import ShowMatcher
def autoload():
return ShowMatcher()
config = []

61
couchpotato/core/media/show/matcher/base.py

@ -0,0 +1,61 @@
from couchpotato import fireEvent, CPLog, tryInt
from couchpotato.core.event import addEvent
from couchpotato.core.media._base.matcher.base import MatcherBase
log = CPLog(__name__)
class Base(MatcherBase):
def __init__(self):
super(Base, self).__init__()
addEvent('%s.matcher.correct_identifier' % self.type, self.correctIdentifier)
def correct(self, chain, release, media, quality):
log.info("Checking if '%s' is valid", release['name'])
log.info2('Release parsed as: %s', chain.info)
if not fireEvent('%s.matcher.correct_identifier' % self.type, chain, media):
log.info('Wrong: %s, identifier does not match', release['name'])
return False
if not fireEvent('matcher.correct_title', chain, media):
log.info("Wrong: '%s', undetermined naming.", (' '.join(chain.info['show_name'])))
return False
return True
def correctIdentifier(self, chain, media):
raise NotImplementedError()
def getChainIdentifier(self, chain):
if 'identifier' not in chain.info:
return None
identifier = self.flattenInfo(chain.info['identifier'])
# Try cast values to integers
for key, value in identifier.items():
if isinstance(value, list):
if len(value) == 1:
value = value[0]
else:
# It might contain multiple season or episode values, but
# there's a chance that it contains the same identifier
# multiple times.
x, y = None, None
for y in value:
y = tryInt(y, None)
if x is None:
x = y
elif x is None or y is None or x != y:
break
if x is not None and y is not None and x == y:
value = value[0]
else:
log.warning('Wrong: identifier contains multiple season or episode values, unsupported: %s' % repr(value))
return None
identifier[key] = tryInt(value, value)
return identifier
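The multi-value handling in getChainIdentifier above accepts a list only when every entry casts to the same integer (e.g. `['1', '01']` parsed from two positions in a release name) and rejects genuine ranges. A condensed sketch of that rule (function name illustrative):

```python
def collapse(values):
    """Return the single shared value, or None when the list mixes values."""
    ints = [int(v) if str(v).isdigit() else None for v in values]
    if any(i is None for i in ints) or len(set(ints)) != 1:
        return None  # unsupported: a real range or an unparsable entry
    return values[0]

print(collapse(['1', '01']))  # both parse to 1 -> keep the first value
print(collapse(['1', '2']))   # a real range -> rejected
```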

30
couchpotato/core/media/show/matcher/episode.py

@ -0,0 +1,30 @@
from couchpotato import fireEvent, CPLog
from couchpotato.core.media.show.matcher.base import Base
log = CPLog(__name__)
class Episode(Base):
type = 'show.episode'
def correctIdentifier(self, chain, media):
identifier = self.getChainIdentifier(chain)
if not identifier:
log.info2('Wrong: release identifier is not valid (unsupported or missing identifier)')
return False
# TODO - Parse episode ranges from identifier to determine if they are multi-part episodes
if any([x in identifier for x in ['episode_from', 'episode_to']]):
log.info2('Wrong: releases with identifier ranges are not supported yet')
return False
required = fireEvent('library.identifier', media, single = True)
# TODO - Support air by date episodes
# TODO - Support episode parts
if identifier != required:
log.info2('Wrong: required identifier (%s) does not match release identifier (%s)', (required, identifier))
return False
return True

9
couchpotato/core/media/show/matcher/main.py

@ -0,0 +1,9 @@
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.matcher.episode import Episode
from couchpotato.core.media.show.matcher.season import Season
class ShowMatcher(MultiProvider):
def getTypes(self):
return [Season, Episode]

27
couchpotato/core/media/show/matcher/season.py

@ -0,0 +1,27 @@
from couchpotato import fireEvent, CPLog
from couchpotato.core.media.show.matcher.base import Base
log = CPLog(__name__)
class Season(Base):
type = 'show.season'
def correctIdentifier(self, chain, media):
identifier = self.getChainIdentifier(chain)
if not identifier:
log.info2('Wrong: release identifier is not valid (unsupported or missing identifier)')
return False
# TODO - Parse episode ranges from identifier to determine if they are season packs
if any([x in identifier for x in ['episode_from', 'episode_to']]):
log.info2('Wrong: releases with identifier ranges are not supported yet')
return False
required = fireEvent('library.identifier', media, single = True)
if identifier != required:
log.info2('Wrong: required identifier (%s) does not match release identifier (%s)', (required, identifier))
return False
return True

0
couchpotato/core/media/show/providers/__init__.py

13
couchpotato/core/media/show/providers/base.py

@ -0,0 +1,13 @@
from couchpotato.core.media._base.providers.info.base import BaseInfoProvider
class ShowProvider(BaseInfoProvider):
type = 'show'
class SeasonProvider(BaseInfoProvider):
type = 'show.season'
class EpisodeProvider(BaseInfoProvider):
type = 'show.episode'

0
couchpotato/core/media/show/providers/info/__init__.py

376
couchpotato/core/media/show/providers/info/thetvdb.py

@ -0,0 +1,376 @@
from datetime import datetime
import os
import traceback
from couchpotato import Env
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.encoding import simplifyString, toUnicode
from couchpotato.core.helpers.variable import splitString, tryInt, tryFloat
from couchpotato.core.logger import CPLog
from couchpotato.core.media.show.providers.base import ShowProvider
from tvdb_api import tvdb_exceptions
from tvdb_api.tvdb_api import Tvdb, Show
log = CPLog(__name__)
autoload = 'TheTVDb'
class TheTVDb(ShowProvider):
# TODO: Consider grabbing zips to put less strain on tvdb
# TODO: Unicode stuff (check)
# TODO: Notify frontend on error (tvdb down at the moment)
# TODO: Expose apikey in setting so it can be changed by user
def __init__(self):
addEvent('show.info', self.getShowInfo, priority = 1)
addEvent('season.info', self.getSeasonInfo, priority = 1)
addEvent('episode.info', self.getEpisodeInfo, priority = 1)
self.tvdb_api_parms = {
'apikey': self.conf('api_key'),
'banners': True,
'language': 'en',
'cache': os.path.join(Env.get('cache_dir'), 'thetvdb_api'),
}
self._setup()
def _setup(self):
self.tvdb = Tvdb(**self.tvdb_api_parms)
self.valid_languages = self.tvdb.config['valid_languages']
def getShow(self, identifier = None):
show = None
try:
log.debug('Getting show: %s', identifier)
show = self.tvdb[int(identifier)]
except (tvdb_exceptions.tvdb_error, IOError), e:
log.error('Failed to getShowInfo for show id "%s": %s', (identifier, traceback.format_exc()))
return None
return show
def getShowInfo(self, identifiers = None):
"""
@param identifiers: dict with identifiers per provider
@return: Full show info including season and episode info
"""
if not identifiers or not identifiers.get('thetvdb'):
return None
identifier = tryInt(identifiers.get('thetvdb'))
cache_key = 'thetvdb.cache.show.%s' % identifier
result = None #self.getCache(cache_key)
if result:
return result
show = self.getShow(identifier = identifier)
if show:
result = self._parseShow(show)
self.setCache(cache_key, result)
return result or {}
def getSeasonInfo(self, identifiers = None, params = {}):
"""Either return a list of all seasons or a single season by number.
identifier is the show 'id'
"""
if not identifiers or not identifiers.get('thetvdb'):
return None
season_number = params.get('season_number', None)
identifier = tryInt(identifiers.get('thetvdb'))
cache_key = 'thetvdb.cache.%s.%s' % (identifier, season_number)
log.debug('Getting SeasonInfo: %s', cache_key)
result = self.getCache(cache_key) or {}
if result:
return result
try:
show = self.tvdb[int(identifier)]
except (tvdb_exceptions.tvdb_error, IOError), e:
log.error('Failed parsing TheTVDB SeasonInfo for id "%s": %s', (identifier, traceback.format_exc()))
return False
result = []
for number, season in show.items():
if season_number is None:
result.append(self._parseSeason(show, number, season))
elif number == season_number:
result = self._parseSeason(show, number, season)
break
self.setCache(cache_key, result)
return result
def getEpisodeInfo(self, identifier = None, params = {}):
"""Either return a list of all episodes, or a single episode when
episode_identifier specifies the episode to search for.
"""
season_number = self.getIdentifier(params.get('season_number', None))
episode_identifier = self.getIdentifier(params.get('episode_identifiers', None))
identifier = self.getIdentifier(identifier)
if not identifier and season_number is None:
return False
# season_identifier must contain the 'show id : season number' since there is no tvdb id
# for season and we need a reference to both the show id and season number
if not identifier and season_number:
try:
identifier, season_number = season_number.split(':')
season_number = int(season_number)
except: return None
identifier = tryInt(identifier)
cache_key = 'thetvdb.cache.%s.%s.%s' % (identifier, episode_identifier, season_number)
log.debug('Getting EpisodeInfo: %s', cache_key)
result = self.getCache(cache_key) or {}
if result:
return result
try:
show = self.tvdb[identifier]
except (tvdb_exceptions.tvdb_error, IOError), e:
log.error('Failed parsing TheTVDB EpisodeInfo for id "%s": %s', (identifier, traceback.format_exc()))
return False
result = []
for number, season in show.items():
if season_number is not None and number != season_number:
continue
for episode in season.values():
if episode_identifier is None:
result.append(self._parseEpisode(episode))
elif episode['id'] == toUnicode(episode_identifier):
result = self._parseEpisode(episode)
self.setCache(cache_key, result)
return result
self.setCache(cache_key, result)
return result
def getIdentifier(self, value):
if type(value) is dict:
return value.get('thetvdb')
return value
def _parseShow(self, show):
#
# NOTE: show object only allows direct access via
# show['id'], not show.get('id')
#
def get(name):
return show.get(name) if not hasattr(show, 'search') else show[name]
## Images
poster = get('poster')
backdrop = get('fanart')
genres = splitString(get('genre'), '|')
if get('firstaired') is not None:
try: year = datetime.strptime(get('firstaired'), '%Y-%m-%d').year
except: year = None
else:
year = None
show_data = {
'identifiers': {
'thetvdb': tryInt(get('id')),
'imdb': get('imdb_id'),
'zap2it': get('zap2it_id'),
},
'type': 'show',
'titles': [get('seriesname')],
'images': {
'poster': [poster] if poster else [],
'backdrop': [backdrop] if backdrop else [],
'poster_original': [],
'backdrop_original': [],
},
'year': year,
'genres': genres,
'network': get('network'),
'plot': get('overview'),
'networkid': get('networkid'),
'air_day': (get('airs_dayofweek') or '').lower(),
'air_time': self.parseTime(get('airs_time')),
'firstaired': get('firstaired'),
'runtime': tryInt(get('runtime')),
'contentrating': get('contentrating'),
'rating': {},
'actors': splitString(get('actors'), '|'),
'status': get('status'),
'language': get('language'),
}
if tryFloat(get('rating')):
show_data['rating']['thetvdb'] = [tryFloat(get('rating')), tryInt(get('ratingcount'))]
show_data = dict((k, v) for k, v in show_data.iteritems() if v)
# Only load season info when available
if type(show) == Show:
# Parse season and episode data
show_data['seasons'] = {}
for season_nr in show:
season = self._parseSeason(show, season_nr, show[season_nr])
season['episodes'] = {}
for episode_nr in show[season_nr]:
season['episodes'][episode_nr] = self._parseEpisode(show[season_nr][episode_nr])
show_data['seasons'][season_nr] = season
# Add alternative titles
# try:
# raw = self.tvdb.search(show['seriesname'])
# if raw:
# for show_info in raw:
# print show_info
# if show_info['id'] == show_data['id'] and show_info.get('aliasnames', None):
# for alt_name in show_info['aliasnames'].split('|'):
# show_data['titles'].append(toUnicode(alt_name))
# except (tvdb_exceptions.tvdb_error, IOError), e:
# log.error('Failed searching TheTVDB for "%s": %s', (show['seriesname'], traceback.format_exc()))
return show_data
def _parseSeason(self, show, number, season):
"""Parse a single season. The season object itself contains no data;
poster banners are read from the parent show.
"""
poster = []
try:
temp_poster = {}
for id, data in show.data['_banners']['season']['season'].items():
if data.get('season') == str(number) and data.get('language') == self.tvdb_api_parms['language']:
temp_poster[tryFloat(data.get('rating')) * tryInt(data.get('ratingcount'))] = data.get('_bannerpath')
#break
poster.append(temp_poster[sorted(temp_poster, reverse = True)[0]])
except:
pass
identifier = tryInt(
show['id'] if show.get('id') else show[number][1]['seasonid'])
season_data = {
'identifiers': {
'thetvdb': identifier
},
'number': tryInt(number),
'images': {
'poster': poster,
},
}
season_data = dict((k, v) for k, v in season_data.iteritems() if v)
return season_data
def _parseEpisode(self, episode):
"""
('episodenumber', u'1'),
('thumb_added', None),
('rating', u'7.7'),
('overview',
u'Experienced waitress Max Black meets her new co-worker, former rich-girl Caroline Channing, and puts her skills to the test at an old but re-emerging Brooklyn diner. Despite her initial distaste for Caroline, Max eventually softens and the two team up for a new business venture.'),
('dvd_episodenumber', None),
('dvd_discid', None),
('combined_episodenumber', u'1'),
('epimgflag', u'7'),
('id', u'4099506'),
('seasonid', u'465948'),
('thumb_height', u'225'),
('tms_export', u'1374789754'),
('seasonnumber', u'1'),
('writer', u'|Michael Patrick King|Whitney Cummings|'),
('lastupdated', u'1371420338'),
('filename', u'http://thetvdb.com/banners/episodes/248741/4099506.jpg'),
('absolute_number', u'1'),
('ratingcount', u'102'),
('combined_season', u'1'),
('thumb_width', u'400'),
('imdb_id', u'tt1980319'),
('director', u'James Burrows'),
('dvd_chapter', None),
('dvd_season', None),
('gueststars',
u'|Brooke Lyons|Noah Mills|Shoshana Bush|Cale Hartmann|Adam Korson|Alex Enriquez|Matt Cook|Bill Parks|Eugene Shaw|Sergey Brusilovsky|Greg Lewis|Cocoa Brown|Nick Jameson|'),
('seriesid', u'248741'),
('language', u'en'),
('productioncode', u'296793'),
('firstaired', u'2011-09-19'),
('episodename', u'Pilot')]
"""
def get(name, default = None):
return episode.get(name, default)
poster = get('filename', [])
episode_data = {
'number': tryInt(get('episodenumber')),
'absolute_number': tryInt(get('absolute_number')),
'identifiers': {
'thetvdb': tryInt(episode['id'])
},
'type': 'episode',
'titles': [get('episodename')] if get('episodename') else [],
'images': {
'poster': [poster] if poster else [],
},
'released': get('firstaired'),
'plot': get('overview'),
'firstaired': get('firstaired'),
'language': get('language'),
}
if get('imdb_id'):
episode_data['identifiers']['imdb'] = get('imdb_id')
episode_data = dict((k, v) for k, v in episode_data.iteritems() if v)
return episode_data
def parseTime(self, time):
return time
def isDisabled(self):
if self.conf('api_key') == '':
log.error('No API key provided.')
return True
else:
return False
config = [{
'name': 'thetvdb',
'groups': [
{
'tab': 'providers',
'name': 'thetvdb',
'label': 'TheTVDB',
'hidden': True,
'description': 'Used for all calls to TheTVDB.',
'options': [
{
'name': 'api_key',
'default': '7966C02F860586D2',
'label': 'Api Key',
},
],
},
],
}]
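As the comment in getEpisodeInfo notes, TheTVDB has no season-level id, so a composite `'show_id:season_number'` string stands in for one. A sketch of that parse (function name illustrative, not from the codebase):

```python
def split_season_identifier(value):
    # "248741:1" -> (248741, 1); anything else is rejected
    try:
        show_id, season_number = str(value).split(':')
        return int(show_id), int(season_number)
    except ValueError:
        return None

print(split_season_identifier('248741:1'))  # -> (248741, 1)
print(split_season_identifier('248741'))    # -> None
```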

64
couchpotato/core/media/show/providers/info/trakt.py

@ -0,0 +1,64 @@
from couchpotato.core.event import addEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.media.movie.providers.automation.trakt.main import TraktBase
from couchpotato.core.media.show.providers.base import ShowProvider
log = CPLog(__name__)
autoload = 'Trakt'
class Trakt(ShowProvider, TraktBase):
def __init__(self):
addEvent('info.search', self.search, priority = 1)
addEvent('show.search', self.search, priority = 1)
def search(self, q, limit = 12):
if self.isDisabled() or not self.conf('enabled', section='shows'):
log.debug('Not searching for show: %s', q)
return False
# Search
log.debug('Searching for show: "%s"', q)
response = self.call('search?type=show&query=%s' % (q))
results = []
for show in response:
results.append(self._parseShow(show.get('show')))
for result in results:
if 'year' in result:
log.info('Found: %s', result['titles'][0] + ' (' + str(result.get('year', 0)) + ')')
else:
log.info('Found: %s', result['titles'][0])
return results
def _parseShow(self, show):
# Images
images = show.get('images', {})
poster = images.get('poster', {}).get('thumb')
backdrop = images.get('fanart', {}).get('thumb')
# Build show dict
show_data = {
'identifiers': {
'thetvdb': show.get('ids', {}).get('tvdb'),
'imdb': show.get('ids', {}).get('imdb'),
'tvrage': show.get('ids', {}).get('tvrage'),
},
'type': 'show',
'titles': [show.get('title')],
'images': {
'poster': [poster] if poster else [],
'backdrop': [backdrop] if backdrop else [],
'poster_original': [],
'backdrop_original': [],
},
'year': show.get('year'),
}
return dict((k, v) for k, v in show_data.iteritems() if v)
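Both this parser and the TheTVDB one finish with `dict((k, v) for k, v in d.iteritems() if v)`, which prunes empty fields but also silently drops any falsy value (`0`, `''`, empty dict). A Python 3 sketch of the behaviour, worth keeping in mind when a show legitimately has a zero or empty value:

```python
def prune(data):
    # Same filter the parsers apply; note it removes 0 and '' as well
    return {k: v for k, v in data.items() if v}

show = {'year': 2011, 'network': '', 'titles': ['Example'], 'runtime': 0}
print(prune(show))  # 'network' and 'runtime' are silently dropped
```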

285
couchpotato/core/media/show/providers/info/tvrage.py

@ -0,0 +1,285 @@
from datetime import datetime
import os
import traceback
from couchpotato import Env
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.encoding import simplifyString, toUnicode
from couchpotato.core.helpers.variable import splitString, tryInt, tryFloat
from couchpotato.core.logger import CPLog
from couchpotato.core.media.show.providers.base import ShowProvider
from tvrage_api import tvrage_api
from tvrage_api import tvrage_exceptions
from tvrage_api.tvrage_api import Show
log = CPLog(__name__)
autoload = 'TVRage'
class TVRage(ShowProvider):
def __init__(self):
# Search is handled by Trakt exclusively as search functionality has
# been removed from TheTVDB provider as well.
addEvent('show.info', self.getShowInfo, priority = 3)
addEvent('season.info', self.getSeasonInfo, priority = 3)
addEvent('episode.info', self.getEpisodeInfo, priority = 3)
self.tvrage_api_parms = {
'apikey': self.conf('api_key'),
'language': 'en',
'cache': os.path.join(Env.get('cache_dir'), 'tvrage_api')
}
self._setup()
def _setup(self):
self.tvrage = tvrage_api.TVRage(**self.tvrage_api_parms)
self.valid_languages = self.tvrage.config['valid_languages']
def getShow(self, identifier):
show = None
try:
log.debug('Getting show: %s', identifier)
show = self.tvrage[int(identifier)]
except (tvrage_exceptions.tvrage_error, IOError), e:
log.error('Failed to getShowInfo for show id "%s": %s', (identifier, traceback.format_exc()))
return show
def getShowInfo(self, identifiers = None):
if not identifiers:
# Raise exception instead? Invocation is clearly wrong!
return None
if 'tvrage' not in identifiers:
# TVRage identifier unavailable, but invocation was valid.
return None
identifier = tryInt(identifiers['tvrage'], None)
if identifier is None:
# Raise exception instead? Invocation is clearly wrong!
return None
cache_key = 'tvrage.cache.show.%s' % identifier
result = self.getCache(cache_key) or []
if not result:
show = self.getShow(identifier)
if show is not None:
result = self._parseShow(show)
self.setCache(cache_key, result)
return result
def getSeasonInfo(self, identifiers = None, params = {}):
"""Either return a list of all seasons or a single season by number.
identifier is the show 'id'
"""
if not identifiers:
# Raise exception instead? Invocation is clearly wrong!
return None
if 'tvrage' not in identifiers:
# TVRage identifier unavailable, but invocation was valid.
return None
season_number = params.get('season_number', None)
identifier = tryInt(identifiers['tvrage'], None)
if identifier is None:
# Raise exception instead? Invocation is clearly wrong!
return None
cache_key = 'tvrage.cache.%s.%s' % (identifier, season_number)
log.debug('Getting TVRage SeasonInfo: %s', cache_key)
result = self.getCache(cache_key) or {}
if result:
return result
try:
show = self.tvrage[int(identifier)]
except (tvrage_exceptions.tvrage_error, IOError), e:
log.error('Failed parsing TVRage SeasonInfo for id "%s": %s', (identifier, traceback.format_exc()))
return False
result = []
for number, season in show.items():
if season_number is None:
result.append(self._parseSeason(show, number, season))
elif number == season_number:
result = self._parseSeason(show, number, season)
break
self.setCache(cache_key, result)
return result
def getEpisodeInfo(self, identifiers = None, params = {}):
"""Either return a list of all episodes, or a single episode when
episode_identifier specifies the episode to search for.
"""
if not identifiers:
# Raise exception instead? Invocation is clearly wrong!
return None
if 'tvrage' not in identifiers:
# TVRage identifier unavailable, but invocation was valid.
return None
season_number = params.get('season_number', None)
episode_identifiers = params.get('episode_identifiers', None)
identifier = tryInt(identifiers['tvrage'], None)
if season_number is None:
# Raise exception instead? Invocation is clearly wrong!
return False
if identifier is None:
# season_identifier might contain the 'show id : season number'
# since there is no tvrage id for season and we need a reference to
# both the show id and season number.
try:
identifier, season_number = season_number.split(':')
season_number = int(season_number)
identifier = tryInt(identifier, None)
except:
pass
if identifier is None:
# Raise exception instead? Invocation is clearly wrong!
return None
episode_identifier = None
if episode_identifiers:
if 'tvrage' in episode_identifiers:
episode_identifier = tryInt(episode_identifiers['tvrage'], None)
if episode_identifier is None:
return None
cache_key = 'tvrage.cache.%s.%s.%s' % (identifier, episode_identifier, season_number)
log.debug('Getting TVRage EpisodeInfo: %s', cache_key)
result = self.getCache(cache_key) or {}
if result:
return result
try:
show = self.tvrage[int(identifier)]
except (tvrage_exceptions.tvrage_error, IOError), e:
log.error('Failed parsing TVRage EpisodeInfo for id "%s": %s', (identifier, traceback.format_exc()))
return False
result = []
for number, season in show.items():
if season_number is not None and number != season_number:
continue
for episode in season.values():
if episode_identifier is None:
result.append(self._parseEpisode(episode))
elif episode['id'] == toUnicode(episode_identifier):
result = self._parseEpisode(episode)
self.setCache(cache_key, result)
return result
self.setCache(cache_key, result)
return result
def _parseShow(self, show):
#
# NOTE: tvrage_api mimics tvdb_api, but some information is unavailable
#
#
# NOTE: show object only allows direct access via
# show['id'], not show.get('id')
#
def get(name):
return show.get(name) if not hasattr(show, 'search') else show[name]
genres = splitString(get('genre'), '|')
classification = get('classification') or ''
if classification == 'Talk Shows':
# "Talk Show" is a genre on TheTVDB.com, as these types of shows,
# e.g. "The Tonight Show Starring Jimmy Fallon", often use
# different naming schemes, it might be useful to the searcher if
# it is added here.
genres.append('Talk Show')
if get('firstaired') is not None:
try: year = datetime.strptime(get('firstaired'), '%Y-%m-%d').year
except: year = None
else:
year = None
show_data = {
'identifiers': {
'tvrage': tryInt(get('id')),
},
'type': 'show',
'titles': [get('seriesname')],
'images': {
'poster': [],
'backdrop': [],
'poster_original': [],
'backdrop_original': [],
},
'year': year,
'genres': genres,
'network': get('network'),
'air_day': (get('airs_dayofweek') or '').lower(),
'air_time': self.parseTime(get('airs_time')),
'firstaired': get('firstaired'),
'runtime': tryInt(get('runtime')),
'status': get('status'),
}
show_data = dict((k, v) for k, v in show_data.iteritems() if v)
# Only load season info when available
if type(show) == Show:
# Parse season and episode data
show_data['seasons'] = {}
for season_nr in show:
season = self._parseSeason(show, season_nr, show[season_nr])
season['episodes'] = {}
for episode_nr in show[season_nr]:
season['episodes'][episode_nr] = self._parseEpisode(show[season_nr][episode_nr])
show_data['seasons'][season_nr] = season
return show_data
def _parseSeason(self, show, number, season):
season_data = {
'number': tryInt(number),
}
season_data = dict((k, v) for k, v in season_data.iteritems() if v)
return season_data
def _parseEpisode(self, episode):
def get(name, default = None):
return episode.get(name, default)
poster = get('filename', [])
episode_data = {
'number': tryInt(get('episodenumber')),
'absolute_number': tryInt(get('absolute_number')),
'identifiers': {
'tvrage': tryInt(episode['id'])
},
'type': 'episode',
'titles': [get('episodename')] if get('episodename') else [],
'images': {
'poster': [poster] if poster else [],
},
'released': get('firstaired'),
'firstaired': get('firstaired'),
'language': get('language'),
}
episode_data = dict((k, v) for k, v in episode_data.iteritems() if v)
return episode_data
def parseTime(self, time):
return time
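The `_parseShow`, `_parseSeason` and `_parseEpisode` helpers above all end with the same falsy-value filter, so keys whose values are `None`, `''` or empty lists never reach the cached result. A minimal standalone sketch of that pattern (Python 3 syntax, hypothetical sample data; the diff itself is Python 2 and uses `iteritems`):

```python
def strip_empty(data):
    """Drop keys with falsy values, mirroring dict((k, v) for k, v in d.iteritems() if v)."""
    return {k: v for k, v in data.items() if v}

episode_data = {
    'number': 3,
    'absolute_number': None,   # tryInt() returned None
    'titles': [],              # no episode name available
    'firstaired': '2014-05-01',
}

cleaned = strip_empty(episode_data)
# Only 'number' and 'firstaired' survive the filter.
```

Note that a legitimate `0` would be dropped by this filter as well, which is why callers treat a missing key as "unknown" rather than as a distinct value.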

couchpotato/core/media/show/providers/info/xem.py

@@ -0,0 +1,216 @@
from couchpotato.core.event import addEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.media.show.providers.base import ShowProvider
log = CPLog(__name__)
autoload = 'Xem'
class Xem(ShowProvider):
'''
Mapping Information
===================
Single
------
You will need the id / identifier of the show, e.g. the tvdb-id for American Dad! is 73141.
The origin is the name of the site/entity the episode, season (and/or absolute) numbers are based on.
http://thexem.de/map/single?id=&origin=&episode=&season=&absolute=
episode, season and absolute are all optional, but the call won't work unless you provide either episode and season OR absolute.
In addition you can provide destination as the name of the desired destination; if not provided, all available destinations are output.
When a destination has two or more addresses, additional entries are added as <destination>_<index>; for now the second address
gets the index "2" (the first index is omitted) and so on.
http://thexem.de/map/single?id=7529&origin=anidb&season=1&episode=2&destination=trakt
{
"result":"success",
"data":{
"trakt": {"season":1,"episode":3,"absolute":3},
"trakt_2":{"season":1,"episode":4,"absolute":4}
},
"message":"single mapping for 7529 on anidb."
}
All
---
Basically the same as "single", just a little easier.
The origin address is added to the output too.
http://thexem.de/map/all?id=7529&origin=anidb
All Names
---------
Get all names xem has to offer
Non-optional params: origin (an entity string like 'tvdb')
Optional params: season, language
- season: a season number, a list like 1,3,5, or a compare operator (ne,gt,ge,lt,le,eq) plus a season number; the default
returns all
- language: a language string like 'us' or 'jp'; the default is all
- defaultNames: 1 (yes) or 0 (no), whether the default names should be added to the list; the default is 0 (no)
http://thexem.de/map/allNames?origin=tvdb&season=le1
{
"result": "success",
"data": {
"248812": ["Dont Trust the Bitch in Apartment 23", "Don't Trust the Bitch in Apartment 23"],
"257571": ["Nazo no Kanojo X"],
"257875": ["Lupin III - Mine Fujiko to Iu Onna", "Lupin III Fujiko to Iu Onna", "Lupin the Third - Mine Fujiko to Iu Onna"]
},
"message": ""
}
'''
def __init__(self):
addEvent('show.info', self.getShowInfo, priority = 5)
addEvent('episode.info', self.getEpisodeInfo, priority = 5)
self.config = {}
self.config['base_url'] = "http://thexem.de"
self.config['url_single'] = u"%(base_url)s/map/single?" % self.config
self.config['url_all'] = u"%(base_url)s/map/all?" % self.config
self.config['url_names'] = u"%(base_url)s/map/names?" % self.config
self.config['url_all_names'] = u"%(base_url)s/map/allNames?" % self.config
def getShowInfo(self, identifiers = None):
if self.isDisabled():
return {}
identifier = identifiers.get('thetvdb')
if not identifier:
return {}
cache_key = 'xem.cache.%s' % identifier
log.debug('Getting showInfo: %s', cache_key)
result = self.getCache(cache_key) or {}
if result:
return result
result['seasons'] = {}
# Create season/episode and absolute mappings
url = self.config['url_all'] + "id=%s&origin=tvdb" % tryUrlencode(identifier)
response = self.getJsonData(url)
if response and response.get('result') == 'success':
data = response.get('data', None)
self.parseMaps(result, data)
# Create name alias mappings
url = self.config['url_names'] + "id=%s&origin=tvdb" % tryUrlencode(identifier)
response = self.getJsonData(url)
if response and response.get('result') == 'success':
data = response.get('data', None)
self.parseNames(result, data)
self.setCache(cache_key, result)
return result
def getEpisodeInfo(self, identifiers = None, params = {}):
episode_num = params.get('episode_number', None)
if episode_num is None:
return False
season_num = params.get('season_number', None)
if season_num is None:
return False
result = self.getShowInfo(identifiers)
if not result:
return False
# Find season
if season_num not in result['seasons']:
return False
season = result['seasons'][season_num]
# Find episode
if episode_num not in season['episodes']:
return False
return season['episodes'][episode_num]
def parseMaps(self, result, data, master = 'tvdb'):
'''Parses the xem map and returns a custom formatted dict map.
To retrieve the map for scene:
if 'scene' in map['map_episode'][1][1]:
print map['map_episode'][1][1]['scene']['season']
'''
if not isinstance(data, list):
return
for episode_map in data:
origin = episode_map.pop(master, None)
if origin is None:
continue # No master origin to map to
o_season = origin['season']
o_episode = origin['episode']
# Create season info
if o_season not in result['seasons']:
result['seasons'][o_season] = {}
season = result['seasons'][o_season]
if 'episodes' not in season:
season['episodes'] = {}
# Create episode info
if o_episode not in season['episodes']:
season['episodes'][o_episode] = {}
episode = season['episodes'][o_episode]
episode['episode_map'] = episode_map
def parseNames(self, result, data):
result['title_map'] = data.pop('all', None)
for season, title_map in data.items():
season = int(season)
# Create season info
if season not in result['seasons']:
result['seasons'][season] = {}
season = result['seasons'][season]
season['title_map'] = title_map
def isDisabled(self):
if __name__ == '__main__':
return False
if self.conf('enabled'):
return False
else:
return True
config = [{
'name': 'xem',
'groups': [
{
'tab': 'providers',
'name': 'xem',
'label': 'TheXem',
'hidden': True,
'description': 'Used for all calls to TheXem.',
'options': [
{
'name': 'enabled',
'default': True,
'label': 'Enabled',
},
],
},
],
}]
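To illustrate what `parseMaps` builds, here is a standalone sketch of the same indexing logic (Python 3 syntax, with hypothetical sample data shaped like the /map/all response documented in the class docstring):

```python
def parse_maps(result, data, master='tvdb'):
    # Mirrors Xem.parseMaps: index every per-site mapping under the
    # master site's season/episode numbers.
    if not isinstance(data, list):
        return
    for episode_map in data:
        origin = episode_map.pop(master, None)
        if origin is None:
            continue  # no master origin to map from
        season = result['seasons'].setdefault(origin['season'], {})
        episodes = season.setdefault('episodes', {})
        episodes.setdefault(origin['episode'], {})['episode_map'] = episode_map

result = {'seasons': {}}
data = [  # hypothetical single entry from /map/all
    {'tvdb': {'season': 1, 'episode': 2},
     'scene': {'season': 1, 'episode': 3}},
]
parse_maps(result, data)
# The scene numbering is now reachable via the tvdb season/episode numbers.
```

The `setdefault` calls condense the explicit "create if missing" blocks in the original, but the resulting structure is the same.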

couchpotato/core/media/show/providers/nzb/__init__.py

couchpotato/core/media/show/providers/nzb/binsearch.py

@@ -0,0 +1,51 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media._base.providers.nzb.binsearch import Base
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.environment import Env
log = CPLog(__name__)
autoload = 'BinSearch'
class BinSearch(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
def buildUrl(self, media, quality):
query = tryUrlencode({
'q': fireEvent('media.search_query', media, single = True),
'm': 'n',
'max': 400,
'adv_age': Env.setting('retention', 'nzb'),
'adv_sort': 'date',
'adv_col': 'on',
'adv_nfo': 'on',
'minsize': quality.get('size_min'),
'maxsize': quality.get('size_max'),
})
return query
class Episode(EpisodeProvider, Base):
def buildUrl(self, media, quality):
query = tryUrlencode({
'q': fireEvent('media.search_query', media, single = True),
'm': 'n',
'max': 400,
'adv_age': Env.setting('retention', 'nzb'),
'adv_sort': 'date',
'adv_col': 'on',
'adv_nfo': 'on',
'minsize': quality.get('size_min'),
'maxsize': quality.get('size_max'),
})
return query
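`tryUrlencode` is CouchPotato's own helper, but its effect here is essentially that of a standard query-string encoder. A sketch of what the BinSearch `buildUrl` query looks like, using the stdlib and hypothetical values standing in for `fireEvent('media.search_query', ...)` and `Env.setting('retention', 'nzb')`:

```python
from urllib.parse import urlencode

params = {
    'q': 'Some Show S01E02',  # hypothetical search query
    'm': 'n',
    'max': 400,
    'adv_age': 1500,          # hypothetical nzb retention setting
    'adv_sort': 'date',
    'adv_col': 'on',
    'adv_nfo': 'on',
    'minsize': 300,
    'maxsize': 1000,
}
query = urlencode(params)
# A flat key=value&... string the provider base appends to its search URL.
```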

couchpotato/core/media/show/providers/nzb/newznab.py

@@ -0,0 +1,49 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.event import fireEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media._base.providers.nzb.newznab import Base
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
log = CPLog(__name__)
autoload = 'Newznab'
class Newznab(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
def buildUrl(self, media, host):
related = fireEvent('library.related', media, single = True)
identifier = fireEvent('library.identifier', media, single = True)
query = tryUrlencode({
't': 'tvsearch',
'apikey': host['api_key'],
'q': related['show']['title'],
'season': identifier['season'],
'extended': 1
})
return query
class Episode(EpisodeProvider, Base):
def buildUrl(self, media, host):
related = fireEvent('library.related', media, single = True)
identifier = fireEvent('library.identifier', media, single = True)
query = tryUrlencode({
't': 'tvsearch',
'apikey': host['api_key'],
'q': related['show']['title'],
'season': identifier['season'],
'ep': identifier['episode'],
'extended': 1
})
return query

couchpotato/core/media/show/providers/nzb/nzbclub.py

@@ -0,0 +1,52 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.nzb.nzbclub import Base
log = CPLog(__name__)
autoload = 'NZBClub'
class NZBClub(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
def buildUrl(self, media):
q = tryUrlencode({
'q': fireEvent('media.search_query', media, single = True),
})
query = tryUrlencode({
'ig': 1,
'rpp': 200,
'st': 5,
'sp': 1,
'ns': 1,
})
return '%s&%s' % (q, query)
class Episode(EpisodeProvider, Base):
def buildUrl(self, media):
q = tryUrlencode({
'q': fireEvent('media.search_query', media, single = True),
})
query = tryUrlencode({
'ig': 1,
'rpp': 200,
'st': 5,
'sp': 1,
'ns': 1,
})
return '%s&%s' % (q, query)

couchpotato/core/media/show/providers/torrent/__init__.py

couchpotato/core/media/show/providers/torrent/bithdtv.py

@@ -0,0 +1,36 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.bithdtv import Base
log = CPLog(__name__)
autoload = 'BiTHDTV'
class BiTHDTV(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
def buildUrl(self, media):
query = tryUrlencode({
'search': fireEvent('media.search_query', media, single = True),
'cat': 12 # Season cat
})
return query
class Episode(EpisodeProvider, Base):
def buildUrl(self, media):
query = tryUrlencode({
'search': fireEvent('media.search_query', media, single = True),
'cat': 10 # Episode cat
})
return query

couchpotato/core/media/show/providers/torrent/bitsoup.py

@@ -0,0 +1,41 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.bitsoup import Base
log = CPLog(__name__)
autoload = 'Bitsoup'
class Bitsoup(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
# For season bundles, bitsoup currently only has one category
def buildUrl(self, media, quality):
query = tryUrlencode({
'search': fireEvent('media.search_query', media, single = True),
'cat': 45 # TV-Packs Category
})
return query
class Episode(EpisodeProvider, Base):
cat_ids = [
([42], ['hdtv_720p', 'webdl_720p', 'webdl_1080p', 'bdrip_1080p', 'bdrip_720p', 'brrip_1080p', 'brrip_720p']),
([49], ['hdtv_sd', 'webdl_480p'])
]
cat_backup_id = 0
def buildUrl(self, media, quality):
query = tryUrlencode({
'search': fireEvent('media.search_query', media, single = True),
'cat': self.getCatId(quality['identifier'])[0],
})
return query

couchpotato/core/media/show/providers/torrent/extratorrent.py

@@ -0,0 +1,24 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.extratorrent import Base
log = CPLog(__name__)
autoload = 'ExtraTorrent'
class ExtraTorrent(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
category = 8
class Episode(EpisodeProvider, Base):
category = 8

couchpotato/core/media/show/providers/torrent/iptorrents.py

@@ -0,0 +1,28 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.iptorrents import Base
log = CPLog(__name__)
autoload = 'IPTorrents'
class IPTorrents(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
cat_ids = [
([65], {}),
]
class Episode(EpisodeProvider, Base):
cat_ids = [
([4], {'codec': ['mp4-asp'], 'resolution': ['sd'], 'source': ['hdtv', 'web']}),
([5], {'codec': ['mp4-avc'], 'resolution': ['720p', '1080p'], 'source': ['hdtv', 'web']}),
([78], {'codec': ['mp4-avc'], 'resolution': ['480p'], 'source': ['hdtv', 'web']}),
([79], {'codec': ['mp4-avc'], 'resolution': ['sd'], 'source': ['hdtv', 'web']})
]

couchpotato/core/media/show/providers/torrent/kickasstorrents.py

@@ -0,0 +1,34 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.kickasstorrents import Base
log = CPLog(__name__)
autoload = 'KickAssTorrents'
class KickAssTorrents(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
urls = {
'detail': '%s/%%s',
'search': '%s/usearch/%s category:tv/%d/',
}
# buildUrl does not need an override
class Episode(EpisodeProvider, Base):
urls = {
'detail': '%s/%%s',
'search': '%s/usearch/%s category:tv/%d/',
}
# buildUrl does not need an override

couchpotato/core/media/show/providers/torrent/sceneaccess.py

@@ -0,0 +1,60 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.event import fireEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.sceneaccess import Base
log = CPLog(__name__)
autoload = 'SceneAccess'
class SceneAccess(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
cat_ids = [
([26], ['hdtv_sd', 'hdtv_720p', 'webdl_720p', 'webdl_1080p']),
]
def buildUrl(self, media, quality):
url = self.urls['archive'] % (
self.getCatId(quality['identifier'])[0],
self.getCatId(quality['identifier'])[0]
)
arguments = tryUrlencode({
'search': fireEvent('media.search_query', media, single = True),
'method': 3,
})
query = "%s&%s" % (url, arguments)
return query
class Episode(EpisodeProvider, Base):
cat_ids = [
([27], ['hdtv_720p', 'webdl_720p', 'webdl_1080p']),
([17, 11], ['hdtv_sd'])
]
def buildUrl(self, media, quality):
url = self.urls['search'] % (
self.getCatId(quality['identifier'])[0],
self.getCatId(quality['identifier'])[0]
)
arguments = tryUrlencode({
'search': fireEvent('media.search_query', media, single = True),
'method': 3,
})
query = "%s&%s" % (url, arguments)
return query

couchpotato/core/media/show/providers/torrent/thepiratebay.py

@@ -0,0 +1,46 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.thepiratebay import Base
log = CPLog(__name__)
autoload = 'ThePirateBay'
class ThePirateBay(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
cat_ids = [
([208], ['hdtv_720p', 'webdl_720p', 'webdl_1080p']),
([205], ['hdtv_sd'])
]
def buildUrl(self, media, page, cats):
return (
tryUrlencode('"%s"' % fireEvent('library.query', media, single = True)),
page,
','.join(str(x) for x in cats)
)
class Episode(EpisodeProvider, Base):
cat_ids = [
([208], ['hdtv_720p', 'webdl_720p', 'webdl_1080p']),
([205], ['hdtv_sd'])
]
def buildUrl(self, media, page, cats):
return (
tryUrlencode('"%s"' % fireEvent('library.query', media, single = True)),
page,
','.join(str(x) for x in cats)
)

couchpotato/core/media/show/providers/torrent/torrentday.py

@@ -0,0 +1,34 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.torrentday import Base
log = CPLog(__name__)
autoload = 'TorrentDay'
class TorrentDay(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
cat_ids = [
([14], ['hdtv_sd', 'hdtv_720p', 'webdl_720p', 'webdl_1080p']),
]
def buildUrl(self, media):
return fireEvent('media.search_query', media, single = True)
class Episode(EpisodeProvider, Base):
cat_ids = [
([7], ['hdtv_720p', 'webdl_720p', 'webdl_1080p']),
([2, 24, 26], ['hdtv_sd'])
]
def buildUrl(self, media):
return fireEvent('media.search_query', media, single = True)

couchpotato/core/media/show/providers/torrent/torrentleech.py

@@ -0,0 +1,42 @@
from couchpotato import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.torrentleech import Base
log = CPLog(__name__)
autoload = 'TorrentLeech'
class TorrentLeech(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
cat_ids = [
([27], ['hdtv_sd', 'hdtv_720p', 'webdl_720p', 'webdl_1080p']),
]
def buildUrl(self, media, quality):
return (
tryUrlencode(fireEvent('media.search_query', media, single = True)),
self.getCatId(quality['identifier'])[0]
)
class Episode(EpisodeProvider, Base):
cat_ids = [
([32], ['hdtv_720p', 'webdl_720p', 'webdl_1080p']),
([26], ['hdtv_sd'])
]
def buildUrl(self, media, quality):
return (
tryUrlencode(fireEvent('media.search_query', media, single = True)),
self.getCatId(quality['identifier'])[0]
)

couchpotato/core/media/show/providers/torrent/torrentpotato.py

@@ -0,0 +1,38 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.torrentpotato import Base
log = CPLog(__name__)
autoload = 'TorrentPotato'
class TorrentPotato(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
def buildUrl(self, media, host):
arguments = tryUrlencode({
'user': host['name'],
'passkey': host['pass_key'],
'search': fireEvent('media.search_query', media, single = True)
})
return '%s?%s' % (host['host'], arguments)
class Episode(EpisodeProvider, Base):
def buildUrl(self, media, host):
arguments = tryUrlencode({
'user': host['name'],
'passkey': host['pass_key'],
'search': fireEvent('media.search_query', media, single = True)
})
return '%s?%s' % (host['host'], arguments)

couchpotato/core/media/show/providers/torrent/torrentshack.py

@@ -0,0 +1,52 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import MultiProvider
from couchpotato.core.media.show.providers.base import SeasonProvider, EpisodeProvider
from couchpotato.core.media._base.providers.torrent.torrentshack import Base
log = CPLog(__name__)
autoload = 'TorrentShack'
class TorrentShack(MultiProvider):
def getTypes(self):
return [Season, Episode]
class Season(SeasonProvider, Base):
# TorrentShack tv season search categories
# TV-SD Pack - 980
# TV-HD Pack - 981
# Full Blu-ray - 970
cat_ids = [
([980], ['hdtv_sd']),
([981], ['hdtv_720p', 'webdl_720p', 'webdl_1080p', 'bdrip_1080p', 'bdrip_720p', 'brrip_1080p', 'brrip_720p']),
([970], ['bluray_1080p', 'bluray_720p']),
]
cat_backup_id = 980
def buildUrl(self, media, quality):
query = (tryUrlencode(fireEvent('media.search_query', media, single = True)),
self.getCatId(quality['identifier'])[0],
self.getSceneOnly())
return query
class Episode(EpisodeProvider, Base):
# TorrentShack tv episode search categories
# TV/x264-HD - 600
# TV/x264-SD - 620
# TV/DVDrip - 700
cat_ids = [
([600], ['hdtv_720p', 'webdl_720p', 'webdl_1080p', 'bdrip_1080p', 'bdrip_720p', 'brrip_1080p', 'brrip_720p']),
([620], ['hdtv_sd'])
]
cat_backup_id = 620
def buildUrl(self, media, quality):
query = (tryUrlencode(fireEvent('media.search_query', media, single = True)),
self.getCatId(quality['identifier'])[0],
self.getSceneOnly())
return query
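The `cat_ids` tables used throughout these torrent providers follow one convention: each entry pairs a list of site category ids with the quality identifiers they cover, with `cat_backup_id` as the fallback. `getCatId` itself lives in the shared provider base; a sketch of the lookup it performs (Python 3 syntax, assumed from how the tables are consumed above):

```python
def get_cat_id(cat_ids, identifier, backup_id=None):
    # Return the category id list whose quality list mentions the identifier,
    # falling back to the provider's backup category.
    for ids, qualities in cat_ids:
        if identifier in qualities:
            return ids
    return [backup_id]

# The TorrentShack episode table from above:
cat_ids = [
    ([600], ['hdtv_720p', 'webdl_720p', 'webdl_1080p', 'bdrip_1080p', 'bdrip_720p', 'brrip_1080p', 'brrip_720p']),
    ([620], ['hdtv_sd']),
]
sd_cats = get_cat_id(cat_ids, 'hdtv_sd', backup_id=620)
unknown_cats = get_cat_id(cat_ids, 'bluray_1080p', backup_id=620)  # falls back
```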

couchpotato/core/media/show/quality/__init__.py

couchpotato/core/media/show/quality/main.py

@@ -0,0 +1,196 @@
from caper import Caper
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.variable import getExt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.quality.base import QualityBase
log = CPLog(__name__)
autoload = 'ShowQuality'
class ShowQuality(QualityBase):
type = 'show'
properties = {
'codec': [
{'identifier': 'mp2', 'label': 'MPEG-2/H.262', 'value': ['mpeg2']},
{'identifier': 'mp4-asp', 'label': 'MPEG-4 ASP', 'value': ['divx', 'xvid']},
{'identifier': 'mp4-avc', 'label': 'MPEG-4 AVC/H.264', 'value': ['avc', 'h264', 'x264', ('h', '264')]},
],
'container': [
{'identifier': 'avi', 'label': 'AVI', 'value': ['avi']},
{'identifier': 'mov', 'label': 'QuickTime Movie', 'value': ['mov']},
{'identifier': 'mpeg-4', 'label': 'MPEG-4', 'value': ['m4v', 'mp4']},
{'identifier': 'mpeg-ts', 'label': 'MPEG-TS', 'value': ['m2ts', 'ts']},
{'identifier': 'mkv', 'label': 'Matroska', 'value': ['mkv']},
{'identifier': 'wmv', 'label': 'Windows Media Video', 'value': ['wmv']}
],
'resolution': [
# TODO interlaced resolutions (auto-fill these options?)
{'identifier': 'sd'},
{'identifier': '480p', 'width': 853, 'height': 480},
{'identifier': '576p', 'width': 1024, 'height': 576},
{'identifier': '720p', 'width': 1280, 'height': 720},
{'identifier': '1080p', 'width': 1920, 'height': 1080}
],
'source': [
{'identifier': 'cam', 'label': 'Cam', 'value': ['camrip', 'hdcam']},
{'identifier': 'hdtv', 'label': 'HDTV', 'value': ['hdtv']},
{'identifier': 'screener', 'label': 'Screener', 'value': ['screener', 'dvdscr', 'ppvrip', 'dvdscreener', 'hdscr']},
{'identifier': 'web', 'label': 'Web', 'value': ['webrip', ('web', 'rip'), 'webdl', ('web', 'dl')]}
]
}
qualities = [
# TODO sizes will need to be adjusted for season packs
# resolutions
{'identifier': '1080p', 'label': '1080p', 'size': (1000, 25000), 'codec': ['mp4-avc'], 'container': ['mpeg-ts', 'mkv'], 'resolution': ['1080p']},
{'identifier': '720p', 'label': '720p', 'size': (1000, 5000), 'codec': ['mp4-avc'], 'container': ['mpeg-ts', 'mkv'], 'resolution': ['720p']},
{'identifier': '480p', 'label': '480p', 'size': (800, 5000), 'codec': ['mp4-avc'], 'container': ['mpeg-ts', 'mkv'], 'resolution': ['480p']},
# sources
{'identifier': 'cam', 'label': 'Cam', 'size': (800, 5000), 'source': ['cam']},
{'identifier': 'hdtv', 'label': 'HDTV', 'size': (800, 5000), 'source': ['hdtv']},
{'identifier': 'screener', 'label': 'Screener', 'size': (800, 5000), 'source': ['screener']},
{'identifier': 'web', 'label': 'Web', 'size': (800, 5000), 'source': ['web']},
]
def __init__(self):
super(ShowQuality, self).__init__()
addEvent('quality.guess', self.guess)
self.caper = Caper()
def guess(self, files, extra = None, size = None, types = None):
if types and self.type not in types:
return
log.debug('Trying to guess quality of: %s', files)
if not extra: extra = {}
# Create hash for cache
cache_key = str([f.replace('.' + getExt(f), '') if len(getExt(f)) < 4 else f for f in files])
cached = self.getCache(cache_key)
if cached and len(extra) == 0:
return cached
qualities = self.all()
# Score files against each quality
score = self.score(files, qualities = qualities)
if score is None:
return None
# Return nothing if all scores are <= 0
has_non_zero = 0
for s in score:
if score[s]['score'] > 0:
has_non_zero += 1
if not has_non_zero:
return None
highest_quality = max(score, key = lambda p: score[p]['score'])
if highest_quality:
for quality in qualities:
if quality.get('identifier') == highest_quality:
quality['is_3d'] = False
if score[highest_quality].get('3d'):
quality['is_3d'] = True
return self.setCache(cache_key, quality)
return None
def score(self, files, qualities = None, types = None):
if types and self.type not in types:
return None
if not qualities:
qualities = self.all()
qualities_expanded = [self.expand(q.copy()) for q in qualities]
# Start with 0
score = {}
for quality in qualities:
score[quality.get('identifier')] = {
'score': 0,
'3d': {}
}
for cur_file in files:
match = self.caper.parse(cur_file, 'scene')
if len(match.chains) < 1:
log.info2('Unable to parse "%s", ignoring file', cur_file)
continue
chain = match.chains[0]
for quality in qualities_expanded:
property_score = self.propertyScore(quality, chain)
self.calcScore(score, quality, property_score)
return score
def propertyScore(self, quality, chain):
score = 0
if 'video' not in chain.info:
return 0
info = fireEvent('matcher.flatten_info', chain.info['video'], single = True)
for key in ['codec', 'resolution', 'source']:
if key not in quality:
# No specific property required
score += 5
continue
available = list(self.getInfo(info, key))
found = False
for property in quality[key]:
required = property['value'] if 'value' in property else [property['identifier']]
if set(available) & set(required):
score += 10
found = True
break
if not found:
score -= 10
return score
def getInfo(self, info, key):
for value in info.get(key, []):
if isinstance(value, list):
yield tuple([x.lower() for x in value])
else:
yield value.lower()
def calcScore(self, score, quality, add_score, threedscore = (0, None), penalty = True):
score[quality['identifier']]['score'] += add_score
# Set order for allow calculation (and cache)
if not self.cached_order:
self.cached_order = {}
for q in self.qualities:
self.cached_order[q.get('identifier')] = self.qualities.index(q)
if penalty and add_score != 0:
for allow in quality.get('allow', []):
score[allow]['score'] -= 40 if self.cached_order[allow] < self.cached_order[quality['identifier']] else 5
# Give penalty to all lower qualities
for q in self.qualities[self.order.index(quality.get('identifier'))+1:]:
if score.get(q.get('identifier')):
score[q.get('identifier')]['score'] -= 1
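The scoring rules in `propertyScore` are simple to state: +5 when a quality leaves a property unspecified, +10 when one of its accepted values appears in the parsed info, -10 otherwise. A standalone sketch of that rule (Python 3 syntax, simplified to flat dicts — the real code first expands property identifiers to their `value` lists via `expand`):

```python
def property_score(quality, info, keys=('codec', 'resolution', 'source')):
    score = 0
    for key in keys:
        if key not in quality:
            score += 5           # quality doesn't constrain this property
            continue
        available = set(info.get(key, []))
        required = set(quality[key])
        score += 10 if available & required else -10
    return score

quality = {'identifier': '720p', 'resolution': ['720p'], 'codec': ['x264']}
info = {'resolution': ['720p'], 'codec': ['x264'], 'source': ['hdtv']}
# codec +10, resolution +10, source unconstrained +5
```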

couchpotato/core/media/show/searcher/__init__.py

couchpotato/core/media/show/searcher/episode.py

@@ -0,0 +1,109 @@
import time
from couchpotato import fireEvent, get_db, Env
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEventAsync
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.searcher.main import Searcher
from couchpotato.core.media._base.searcher.main import SearchSetupError
from couchpotato.core.media.show import ShowTypeBase
from couchpotato.core.helpers.variable import strtotime
log = CPLog(__name__)
autoload = 'EpisodeSearcher'
class EpisodeSearcher(Searcher, ShowTypeBase):
type = 'episode'
in_progress = False
def __init__(self):
super(EpisodeSearcher, self).__init__()
addEvent('%s.searcher.all' % self.getType(), self.searchAll)
addEvent('%s.searcher.single' % self.getType(), self.single)
addEvent('searcher.correct_release', self.correctRelease)
addApiView('%s.searcher.full_search' % self.getType(), self.searchAllView, docs = {
'desc': 'Starts a full search for all wanted shows',
})
addApiView('%s.searcher.single' % self.getType(), self.singleView)
def searchAllView(self, **kwargs):
fireEventAsync('%s.searcher.all' % self.getType(), manual = True)
return {
'success': not self.in_progress
}
def searchAll(self, manual = False):
pass
def singleView(self, media_id, **kwargs):
db = get_db()
media = db.get('id', media_id)
return {
'result': fireEvent('%s.searcher.single' % self.getType(), media, single = True)
}
def correctRelease(self, release = None, media = None, quality = None, **kwargs):
if media.get('type') != 'show.episode': return
retention = Env.setting('retention', section = 'nzb')
if release.get('seeders') is None and 0 < retention < release.get('age', 0):
log.info2('Wrong: Outside retention, age is %s, needs %s or lower: %s', (release['age'], retention, release['name']))
return False
# Check for required and ignored words
if not self.correctWords(release['name'], media):
return False
preferred_quality = quality if quality else fireEvent('quality.single', identifier = quality['identifier'], single = True)
# Contains lower quality string
contains_other = self.containsOtherQuality(release, preferred_quality = preferred_quality, types= [self._type])
if contains_other != False:
log.info2('Wrong: %s, looking for %s, found %s', (release['name'], quality['label'], [x for x in contains_other] if contains_other else 'no quality'))
return False
# TODO Matching is quite costly, maybe we should be caching release matches somehow? (also look at caper optimizations)
match = fireEvent('matcher.match', release, media, quality, single = True)
if match:
return match.weight
return False
def couldBeReleased(self, is_pre_release, dates, media):
"""
Determine if episode could have aired by now
@param is_pre_release: True if quality is pre-release, otherwise False. Ignored for episodes.
@param dates:
@param media: media dictionary to retrieve episode air date from.
@return: True if the episode could have aired by now, otherwise False.
"""
now = time.time()
released = strtotime(media.get('info', {}).get('released'), '%Y-%m-%d')
return released < now
def getProfileId(self, media):
assert media and media['type'] == 'show.episode'
profile_id = None
related = fireEvent('library.related', media, single = True)
if related:
show = related.get('show')
if show:
profile_id = show.get('profile_id')
return profile_id
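The retention check in `correctRelease` above only applies to Usenet results: torrent releases carry a `seeders` count and are never age-filtered. A minimal standalone sketch of that predicate (the release dicts here are illustrative, not the full release schema):

```python
def outside_retention(release, retention):
    """Reject Usenet releases older than the configured retention (in days).

    Torrent releases report 'seeders' and are exempt; a retention of 0
    disables the check entirely.
    """
    if release.get('seeders') is not None:
        return False
    return 0 < retention < release.get('age', 0)

# Usenet release older than an 1100-day retention window is rejected
print(outside_retention({'age': 1500, 'name': 'show.s01e01'}, 1100))  # True
# Torrent releases (seeders present) are always kept
print(outside_retention({'seeders': 12, 'age': 1500}, 1100))  # False
```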

137
couchpotato/core/media/show/searcher/season.py

@@ -0,0 +1,137 @@
from couchpotato import get_db, Env
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEventAsync, fireEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.searcher.main import Searcher
from couchpotato.core.media.movie.searcher import SearchSetupError
from couchpotato.core.media.show import ShowTypeBase
from couchpotato.core.helpers.variable import getTitle
log = CPLog(__name__)
autoload = 'SeasonSearcher'
class SeasonSearcher(Searcher, ShowTypeBase):
type = 'season'
in_progress = False
def __init__(self):
super(SeasonSearcher, self).__init__()
addEvent('%s.searcher.all' % self.getType(), self.searchAll)
addEvent('%s.searcher.single' % self.getType(), self.single)
addEvent('searcher.correct_release', self.correctRelease)
addApiView('%s.searcher.full_search' % self.getType(), self.searchAllView, docs = {
'desc': 'Starts a full search for all wanted seasons',
})
def searchAllView(self, **kwargs):
fireEventAsync('%s.searcher.all' % self.getType(), manual = True)
return {
'success': not self.in_progress
}
def searchAll(self, manual = False):
pass
def single(self, media, search_protocols = None, manual = False, force_download = False, notify = True):
# The user can prefer episode releases over season releases.
prefer_episode_releases = self.conf('prefer_episode_releases')
all_episodes_available = self.couldBeReleased(False, [], media)
event_type = 'show.season.searcher.started'
related = fireEvent('library.related', media, single = True)
# Collect the wanted (active) episodes of this season so they can be searched individually
episodes = [episode for episode in related.get('episodes', []) if episode.get('status') == 'active']
default_title = getTitle(related.get('show'))
fireEvent('notify.frontend', type = event_type, data = {'_id': media['_id']}, message = 'Searching for "%s"' % default_title)
result = False
if not all_episodes_available or prefer_episode_releases:
result = True
for episode in episodes:
if not fireEvent('show.episode.searcher.single', episode, search_protocols, manual, force_download, False):
result = False
break
if not result and all_episodes_available:
# The user might have preferred episode releases over season
# releases, but that did not work out, fallback to season releases.
result = super(SeasonSearcher, self).single(media, search_protocols, manual, force_download, False)
event_type = 'show.season.searcher.ended'
fireEvent('notify.frontend', type = event_type, data = {'_id': media['_id']})
return result
def correctRelease(self, release = None, media = None, quality = None, **kwargs):
if media.get('type') != 'show.season':
return
retention = Env.setting('retention', section = 'nzb')
if release.get('seeders') is None and 0 < retention < release.get('age', 0):
log.info2('Wrong: Outside retention, age is %s, needs %s or lower: %s', (release['age'], retention, release['name']))
return False
# Check for required and ignored words
if not self.correctWords(release['name'], media):
return False
# Resolve full quality info when only a partial quality dict was given; bail out without one
if not quality: return False
preferred_quality = quality if quality.get('label') else fireEvent('quality.single', identifier = quality['identifier'], single = True)
# Contains lower quality string
contains_other = self.containsOtherQuality(release, preferred_quality = preferred_quality, types = [self._type])
if contains_other != False:
log.info2('Wrong: %s, looking for %s, found %s', (release['name'], quality['label'], [x for x in contains_other] if contains_other else 'no quality'))
return False
# TODO Matching is quite costly, maybe we should be caching release matches somehow? (also look at caper optimizations)
match = fireEvent('matcher.match', release, media, quality, single = True)
if match:
return match.weight
return False
def couldBeReleased(self, is_pre_release, dates, media):
episodes = []
all_episodes_available = True
related = fireEvent('library.related', media, single = True)
if related:
for episode in related.get('episodes', []):
if episode.get('status') == 'active':
episodes.append(episode)
else:
all_episodes_available = False
if not episodes:
all_episodes_available = False
return all_episodes_available
def getTitle(self, media):
# FIXME: Season media type should have a title.
# e.g. <Show> Season <Number>
title = None
related = fireEvent('library.related', media, single = True)
if related:
title = getTitle(related.get('show'))
return title
def getProfileId(self, media):
assert media and media['type'] == 'show.season'
profile_id = None
related = fireEvent('library.related', media, single = True)
if related:
show = related.get('show')
if show:
profile_id = show.get('profile_id')
return profile_id
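`couldBeReleased` for a season, above, treats the season as searchable only once every related episode has become wanted ('active'); an empty episode list also counts as unavailable. A hedged sketch of that rule, with hypothetical episode dicts:

```python
def all_episodes_available(episodes):
    # A season release is only considered once every known episode of the
    # season is wanted ('active'); no episodes at all means not available.
    if not episodes:
        return False
    return all(episode.get('status') == 'active' for episode in episodes)

print(all_episodes_available([{'status': 'active'}, {'status': 'active'}]))  # True
print(all_episodes_available([{'status': 'active'}, {'status': 'done'}]))    # False
```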

93
couchpotato/core/media/show/searcher/show.py

@@ -0,0 +1,93 @@
from couchpotato import get_db
from couchpotato.api import addApiView
from couchpotato.core.event import fireEvent, addEvent, fireEventAsync
from couchpotato.core.helpers.variable import getTitle
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.searcher.main import Searcher
from couchpotato.core.media._base.searcher.main import SearchSetupError
from couchpotato.core.media.show import ShowTypeBase
log = CPLog(__name__)
autoload = 'ShowSearcher'
class ShowSearcher(Searcher, ShowTypeBase):
type = 'show'
in_progress = False
def __init__(self):
super(ShowSearcher, self).__init__()
addEvent('%s.searcher.all' % self.getType(), self.searchAll)
addEvent('%s.searcher.single' % self.getType(), self.single)
addEvent('searcher.get_search_title', self.getSearchTitle)
addApiView('%s.searcher.full_search' % self.getType(), self.searchAllView, docs = {
'desc': 'Starts a full search for all wanted episodes',
})
def searchAllView(self, **kwargs):
fireEventAsync('%s.searcher.all' % self.getType(), manual = True)
return {
'success': not self.in_progress
}
def searchAll(self, manual = False):
pass
def single(self, media, search_protocols = None, manual = False, force_download = False, notify = True):
db = get_db()
profile = db.get('id', media['profile_id'])
if not profile or (media['status'] == 'done' and not manual):
log.debug('Media does not have a profile or already done, assuming in manage tab.')
fireEvent('media.restatus', media['_id'], single = True)
return
default_title = getTitle(media)
if not default_title:
log.error('No proper info found for media, removing it from library to stop it from causing more issues.')
fireEvent('media.delete', media['_id'], single = True)
return
fireEvent('notify.frontend', type = 'show.searcher.started.%s' % media['_id'], data = True, message = 'Searching for "%s"' % default_title)
seasons = []
tree = fireEvent('library.tree', media, single = True)
if tree:
for season in tree.get('seasons', []):
if not season.get('info'):
continue
# Skip specials (and seasons missing 'number') for now
# TODO: set status for specials to skipped by default
if not season['info'].get('number'):
continue
seasons.append(season)
result = True
for season in seasons:
if not fireEvent('show.season.searcher.single', season, search_protocols, manual, force_download, False):
result = False
break
fireEvent('notify.frontend', type = 'show.searcher.ended.%s' % media['_id'], data = True)
return result
def getSearchTitle(self, media):
show = None
if media.get('type') == 'show':
show = media
elif media.get('type') in ('show.season', 'show.episode'):
related = fireEvent('library.related', media, single = True)
show = related['show']
if show:
return getTitle(show)

5
couchpotato/core/plugins/dashboard.py

@@ -76,9 +76,10 @@ class Dashboard(Plugin):
coming_soon = False
# Theater quality
if pp.get('theater') and fireEvent('movie.searcher.could_be_released', True, eta, media['info']['year'], single = True):
event = '%s.searcher.could_be_released' % (media.get('type'))
if pp.get('theater') and fireEvent(event, True, eta, media, single = True):
coming_soon = 'theater'
elif pp.get('dvd') and fireEvent('movie.searcher.could_be_released', False, eta, media['info']['year'], single = True):
elif pp.get('dvd') and fireEvent(event, False, eta, media, single = True):
coming_soon = 'dvd'
if coming_soon:

5
couchpotato/core/plugins/quality/__init__.py

@@ -1,5 +0,0 @@
from .main import QualityPlugin
def autoload():
return QualityPlugin()

19
couchpotato/core/plugins/score/main.py

@@ -1,4 +1,4 @@
from couchpotato.core.event import addEvent
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import getTitle, splitString, removeDuplicate
from couchpotato.core.logger import CPLog
@@ -16,17 +16,20 @@ class Score(Plugin):
def __init__(self):
addEvent('score.calculate', self.calculate)
def calculate(self, nzb, movie):
def calculate(self, nzb, media):
""" Calculate the score of a NZB, used for sorting later """
# Fetch root media item (movie, show)
root = fireEvent('library.root', media, single = True)
# Merge global and category
preferred_words = splitString(Env.setting('preferred_words', section = 'searcher').lower())
try: preferred_words = removeDuplicate(preferred_words + splitString(movie['category']['preferred'].lower()))
try: preferred_words = removeDuplicate(preferred_words + splitString(media['category']['preferred'].lower()))
except: pass
score = nameScore(toUnicode(nzb['name']), movie['info']['year'], preferred_words)
score = nameScore(toUnicode(nzb['name']), root['info'].get('year'), preferred_words)
for movie_title in movie['info']['titles']:
for movie_title in root['info']['titles']:
score += nameRatioScore(toUnicode(nzb['name']), toUnicode(movie_title))
score += namePositionScore(toUnicode(nzb['name']), toUnicode(movie_title))
@@ -44,15 +47,15 @@ class Score(Plugin):
score += providerScore(nzb['provider'])
# Duplicates in name
score += duplicateScore(nzb['name'], getTitle(movie))
score += duplicateScore(nzb['name'], getTitle(root))
# Merge global and category
ignored_words = splitString(Env.setting('ignored_words', section = 'searcher').lower())
try: ignored_words = removeDuplicate(ignored_words + splitString(movie['category']['ignored'].lower()))
try: ignored_words = removeDuplicate(ignored_words + splitString(media['category']['ignored'].lower()))
except: pass
# Partial ignored words
score += partialIgnoredScore(nzb['name'], getTitle(movie), ignored_words)
score += partialIgnoredScore(nzb['name'], getTitle(root), ignored_words)
# Ignore single downloads from multipart
score += halfMultipartScore(nzb['name'])
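The score changes above merge the globally configured preferred/ignored words with the media item's category words, deduplicating while keeping order. `splitString` and `removeDuplicate` are CouchPotato helpers; under the assumption they behave like the simple stand-ins below, the merge works like this:

```python
def split_words(value):
    # splitString-style stand-in: comma-separated, trimmed, lowercased, empties dropped
    return [w.strip().lower() for w in (value or '').split(',') if w.strip()]

def merge_word_lists(global_setting, category_setting):
    # removeDuplicate-style merge: keep first occurrence, preserve order
    merged, seen = [], set()
    for word in split_words(global_setting) + split_words(category_setting):
        if word not in seen:
            seen.add(word)
            merged.append(word)
    return merged

print(merge_word_lists('x264, 720p', '720p, BluRay'))  # ['x264', '720p', 'bluray']
```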

4
couchpotato/core/plugins/score/scores.py

@@ -76,7 +76,7 @@ def namePositionScore(nzb_name, movie_name):
score = 0
nzb_words = re.split(r'\W+', simplifyString(nzb_name))
qualities = fireEvent('quality.all', single = True)
qualities = fireEvent('quality.all', merge = True)
try:
nzb_name = re.search(r'([\'"])[^\1]*\1', nzb_name).group(0)
@@ -108,7 +108,7 @@ def namePositionScore(nzb_name, movie_name):
found_quality = quality['identifier']
# Alt in words
for alt in quality['alternative']:
for alt in quality.get('alternative', []):
if alt in nzb_words:
found_quality = alt
break

9
couchpotato/static/scripts/block/navigation.js

@@ -20,6 +20,15 @@ var BlockNavigation = new Class({
},
removeTab: function(name) {
var self = this;
var element = self.nav.getElement('li.tab_'+name);
if (element) {
element.dispose();
}
},
activate: function(name){
var self = this;

14
couchpotato/static/scripts/combined.base.min.js

@@ -265,6 +265,13 @@ var CouchPotato = new Class({
setting_links.each(function(a) {
self.block.more.addLink(a);
});
var support_classes = [];
self.options.support.each(function(support) {
if (support) {
support_classes.push("support_" + support);
}
});
document.body.addClass(support_classes.join(" "));
new ScrollSpy({
min: 10,
onLeave: function() {
@@ -825,6 +832,13 @@ var BlockNavigation = new Class({
var self = this;
return new Element("li.tab_" + (name || "unknown")).grab(new Element("a", tab)).inject(self.nav);
},
removeTab: function(name) {
var self = this;
var element = self.nav.getElement("li.tab_" + name);
if (element) {
element.dispose();
}
},
activate: function(name) {
var self = this;
self.nav.getElements(".active").removeClass("active");

800
couchpotato/static/scripts/combined.plugins.min.js

@@ -176,6 +176,185 @@ window.addEvent("domready", function() {
new PutIODownloader();
});
var QualityBase = new Class({
tab: "",
content: "",
setup: function(data) {
var self = this;
self.qualities = data.qualities;
self.profiles_list = null;
self.profiles = [];
Array.each(data.profiles, self.createProfilesClass.bind(self));
App.addEvent("loadSettings", self.addSettings.bind(self));
},
getProfile: function(id) {
return this.profiles.filter(function(profile) {
return profile.data._id == id;
}).pick();
},
getActiveProfiles: function() {
return Array.filter(this.profiles, function(profile) {
return !profile.data.hide;
});
},
getQuality: function(identifier) {
try {
return this.qualities.filter(function(q) {
return q.identifier == identifier;
}).pick();
} catch (e) {}
return {};
},
addSettings: function() {
var self = this;
self.settings = App.getPage("Settings");
self.settings.addEvent("create", function() {
var tab = self.settings.createSubTab("profile", {
label: "Quality",
name: "profile",
subtab_label: "Qualities"
}, self.settings.tabs.searcher, "searcher");
self.tab = tab.tab;
self.content = tab.content;
self.createProfiles();
self.createProfileOrdering();
self.createSizes();
});
},
createProfiles: function() {
var self = this;
var non_core_profiles = Array.filter(self.profiles, function(profile) {
return !profile.isCore();
});
var count = non_core_profiles.length;
self.settings.createGroup({
label: "Quality Profiles",
description: "Create your own profiles with multiple qualities."
}).inject(self.content).adopt(self.profile_container = new Element("div.container"), new Element("a.add_new_profile", {
text: count > 0 ? "Create another quality profile" : "Click here to create a quality profile.",
events: {
click: function() {
var profile = self.createProfilesClass();
$(profile).inject(self.profile_container);
}
}
}));
Array.each(non_core_profiles, function(profile) {
$(profile).inject(self.profile_container);
});
},
createProfilesClass: function(data) {
var self = this;
data = data || {
id: randomString()
};
var profile = new Profile(data);
self.profiles.include(profile);
return profile;
},
createProfileOrdering: function() {
var self = this;
self.settings.createGroup({
label: "Profile Defaults",
description: "(Needs refresh '" + (App.isMac() ? "CMD+R" : "F5") + "' after editing)"
}).grab(new Element(".ctrlHolder#profile_ordering").adopt(new Element("label[text=Order]"), self.profiles_list = new Element("ul"), new Element("p.formHint", {
html: "Change the order the profiles are in the dropdown list. Uncheck to hide it completely.<br />First one will be default."
}))).inject(self.content);
Array.each(self.profiles, function(profile) {
var check;
new Element("li", {
"data-id": profile.data._id
}).adopt(check = new Element("input[type=checkbox]", {
checked: !profile.data.hide,
events: {
change: self.saveProfileOrdering.bind(self)
}
}), new Element("span.profile_label", {
text: profile.data.label
}), new Element("span.handle.icon-handle")).inject(self.profiles_list);
});
var sorted_changed = false;
self.profile_sortable = new Sortables(self.profiles_list, {
revert: true,
handle: ".handle",
opacity: .5,
onSort: function() {
sorted_changed = true;
},
onComplete: function() {
if (sorted_changed) {
self.saveProfileOrdering();
sorted_changed = false;
}
}
});
},
saveProfileOrdering: function() {
var self = this, ids = [], hidden = [];
self.profiles_list.getElements("li").each(function(el, nr) {
ids.include(el.get("data-id"));
hidden[nr] = +!el.getElement("input[type=checkbox]").get("checked");
});
Api.request("profile.save_order", {
data: {
ids: ids,
hidden: hidden
}
});
},
createSizes: function() {
var self = this;
var group = self.settings.createGroup({
label: "Sizes",
description: "Edit the minimal and maximum sizes (in MB) for each quality.",
advanced: true,
name: "sizes"
}).inject(self.content);
new Element("div.item.head.ctrlHolder").adopt(new Element("span.label", {
text: "Quality"
}), new Element("span.min", {
text: "Min"
}), new Element("span.max", {
text: "Max"
})).inject(group);
Array.each(self.qualities, function(quality) {
new Element("div.ctrlHolder.item").adopt(new Element("span.label", {
text: quality.label
}), new Element("input.min[type=text]", {
value: quality.size_min,
events: {
keyup: function(e) {
self.changeSize(quality.identifier, "size_min", e.target.get("value"));
}
}
}), new Element("input.max[type=text]", {
value: quality.size_max,
events: {
keyup: function(e) {
self.changeSize(quality.identifier, "size_max", e.target.get("value"));
}
}
})).inject(group);
});
},
size_timer: {},
changeSize: function(identifier, type, value) {
var self = this;
if (self.size_timer[identifier + type]) clearTimeout(self.size_timer[identifier + type]);
self.size_timer[identifier + type] = function() {
Api.request("quality.size.save", {
data: {
identifier: identifier,
value_type: type,
value: value
}
});
}.delay(300);
}
});
window.Quality = new QualityBase();
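`changeSize` above debounces the save request with a 300 ms timer keyed per quality/field, so rapid keystrokes in a size input produce a single API call. A Python analogue of that per-key debounce (`threading.Timer` stands in for MooTools' `delay`; the class and key names are illustrative):

```python
import threading

class Debouncer:
    """Run fn(*args) only after `delay` seconds of quiet per key."""

    def __init__(self, delay):
        self.delay = delay
        self.timers = {}

    def call(self, key, fn, *args):
        timer = self.timers.get(key)
        if timer:
            timer.cancel()  # restart the countdown on every new event
        timer = threading.Timer(self.delay, fn, args)
        self.timers[key] = timer
        timer.start()

# Three quick "keystrokes" on the same field collapse into one save
debouncer = Debouncer(0.3)
for value in ('7', '72', '720'):
    debouncer.call('hd.size_min', print, value)
```

Only the last call per key fires, mirroring how `size_timer[identifier + type]` is cleared and re-armed on each keyup.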
var BlockSearch = new Class({
Extends: BlockBase,
options: {
@@ -427,6 +606,8 @@ var MovieDetails = new Class({
var MovieList = new Class({
Implements: [ Events, Options ],
media_type: "movie",
list_key: "movies",
options: {
api_call: "media.list",
navigation: true,
@@ -855,7 +1036,7 @@ var MovieList = new Class({
}
Api.request(self.options.api_call, {
data: Object.merge({
type: self.options.type || "movie",
type: self.media_type || "movie",
status: self.options.status,
limit_offset: self.options.limit ? self.options.limit + "," + self.offset : null
}, self.filter),
@@ -871,8 +1052,9 @@ var MovieList = new Class({
}, 1e3);
self.el.setStyle("min-height", null);
}
self.store(json.movies);
self.addMovies(json.movies, json.total || json.movies.length);
var items = json[self.list_key] || [];
self.store(items);
self.addMovies(items, json.total || items.length);
if (self.scrollspy) {
self.load_more.set("text", "load more movies");
self.scrollspy.start();
@@ -2083,6 +2265,7 @@ Page.Movies = new Class({
var BlockSearchMovieItem = new Class({
Implements: [ Options, Events ],
media_type: "movie",
initialize: function(info, options) {
var self = this;
self.setOptions(options);
@@ -2156,7 +2339,7 @@ var BlockSearchMovieItem = new Class({
var self = this;
if (e) e.preventDefault();
self.loadingMask();
Api.request("movie.add", {
Api.request(self.media_type + ".add", {
data: {
identifier: self.info.imdb,
title: self.title_select.get("value"),
@@ -2465,80 +2648,384 @@ var TraktAutomation = new Class({
new TraktAutomation();
var NotificationBase = new Class({
var Episode = new Class({
Extends: BlockBase,
Implements: [ Options, Events ],
initialize: function(options) {
action: {},
initialize: function(show, options, data) {
var self = this;
self.setOptions(options);
App.addEvent("unload", self.stopPoll.bind(self));
App.addEvent("reload", self.startInterval.bind(self, [ true ]));
App.on("notification", self.notify.bind(self));
App.on("message", self.showMessage.bind(self));
App.addEvent("loadSettings", self.addTestButtons.bind(self));
self.notifications = [];
App.addEvent("load", function() {
App.block.notification = new BlockMenu(self, {
button_class: "icon-notifications",
class: "notification_menu",
onOpen: self.markAsRead.bind(self)
});
$(App.block.notification).inject(App.getBlock("search"), "after");
self.badge = new Element("div.badge").inject(App.block.notification, "top").hide();
});
window.addEvent("load", function() {
self.startInterval.delay($(window).getSize().x <= 480 ? 2e3 : 100, self);
});
self.show = show;
self.options = options;
self.data = data;
self.profile = self.show.profile;
self.el = new Element("div.item.episode").adopt(self.detail = new Element("div.item.data"));
self.create();
},
notify: function(result) {
create: function() {
var self = this;
var added = new Date();
added.setTime(result.added * 1e3);
result.el = App.getBlock("notification").addLink(new Element("span." + (result.read ? "read" : "")).adopt(new Element("span.message", {
html: result.message
}), new Element("span.added", {
text: added.timeDiffInWords(),
title: added
})), "top");
self.notifications.include(result);
if ((result.important !== undefined || result.sticky !== undefined) && !result.read) {
var sticky = true;
App.trigger("message", [ result.message, sticky, result ]);
} else if (!result.read) {
self.setBadge(self.notifications.filter(function(n) {
return !n.read;
}).length);
self.detail.set("id", "episode_" + self.data._id);
self.detail.adopt(new Element("span.episode", {
text: self.data.info.number || 0
}), new Element("span.name", {
text: self.getTitle()
}), new Element("span.firstaired", {
text: self.data.info.firstaired
}), self.quality = new Element("span.quality", {
events: {
click: function(e) {
var releases = self.detail.getElement(".item-actions .releases");
if (releases.isVisible()) releases.fireEvent("click", [ e ]);
}
},
setBadge: function(value) {
var self = this;
self.badge.set("text", value);
self.badge[value ? "show" : "hide"]();
},
markAsRead: function(force_ids) {
var self = this, ids = force_ids;
if (!force_ids) {
var rn = self.notifications.filter(function(n) {
return !n.read && n.important === undefined;
});
ids = [];
rn.each(function(n) {
ids.include(n._id);
});
}
if (ids.length > 0) Api.request("notification.markread", {
data: {
ids: ids.join(",")
},
onSuccess: function() {
self.setBadge("");
}), self.actions = new Element("div.item-actions"));
if (self.profile.data) {
self.profile.getTypes().each(function(type) {
var q = self.addQuality(type.get("quality"), type.get("3d"));
if ((type.finish == true || type.get("finish")) && !q.hasClass("finish")) {
q.addClass("finish");
q.set("title", q.get("title") + " Will finish searching for this movie if this quality is found.");
}
});
}
self.updateReleases();
Object.each(self.options.actions, function(action, key) {
self.action[key.toLowerCase()] = action = new self.options.actions[key](self);
if (action.el) self.actions.adopt(action);
});
},
startInterval: function(force) {
updateReleases: function() {
var self = this;
if (self.stopped && !force) {
self.stopped = false;
if (!self.data.releases || self.data.releases.length == 0) return;
self.data.releases.each(function(release) {
var q = self.quality.getElement(".q_" + release.quality + (release.is_3d ? ".is_3d" : ":not(.is_3d)")), status = release.status;
if (!q && (status == "snatched" || status == "seeding" || status == "done")) q = self.addQuality(release.quality, release.is_3d || false);
if (q && !q.hasClass(status)) {
q.addClass(status);
q.set("title", (q.get("title") ? q.get("title") : "") + " status: " + status);
}
});
},
addQuality: function(quality, is_3d) {
var self = this, q = Quality.getQuality(quality);
return new Element("span", {
text: q.label + (is_3d ? " 3D" : ""),
class: "q_" + q.identifier + (is_3d ? " is_3d" : ""),
title: ""
}).inject(self.quality);
},
getTitle: function() {
var self = this;
var title = "";
if (self.data.info.titles && self.data.info.titles.length > 0) {
title = self.data.info.titles[0];
} else {
title = "Episode " + self.data.info.number;
}
return title;
},
getIdentifier: function() {
var self = this;
try {
return self.get("identifiers").imdb;
} catch (e) {}
return self.get("imdb");
},
get: function(attr) {
return this.data[attr] || this.data.info[attr];
}
});
var ShowList = new Class({
Extends: MovieList,
media_type: "show",
list_key: "shows"
});
Page.Shows = new Class({
Extends: PageBase,
name: "shows",
icon: "show",
sub_pages: [ "Wanted" ],
default_page: "Wanted",
current_page: null,
initialize: function(parent, options) {
var self = this;
self.parent(parent, options);
self.navigation = new BlockNavigation();
$(self.navigation).inject(self.content, "top");
App.on("shows.enabled", self.toggleShows.bind(self));
},
defaultAction: function(action, params) {
var self = this;
if (self.current_page) {
self.current_page.hide();
if (self.current_page.list && self.current_page.list.navigation) self.current_page.list.navigation.dispose();
}
var route = new Route();
route.parse(action);
var page_name = route.getPage() != "index" ? route.getPage().capitalize() : self.default_page;
var page = self.sub_pages.filter(function(page) {
return page.name == page_name;
}).pick()["class"];
page.open(route.getAction() || "index", params);
page.show();
if (page.list && page.list.navigation) page.list.navigation.inject(self.navigation);
self.current_page = page;
self.navigation.activate(page_name.toLowerCase());
},
toggleShows: function(notification) {
document.body[notification.data === true ? "addClass" : "removeClass"]("show_support");
}
});
var BlockSearchShowItem = new Class({
Extends: BlockSearchMovieItem,
media_type: "show"
});
var Season = new Class({
Extends: BlockBase,
action: {},
initialize: function(show, options, data) {
var self = this;
self.setOptions(options);
self.show = show;
self.options = options;
self.data = data;
self.profile = self.show.profile;
self.el = new Element("div.item.season").adopt(self.detail = new Element("div.item.data"));
self.create();
},
create: function() {
var self = this;
self.detail.set("id", "season_" + self.data._id);
self.detail.adopt(new Element("span.name", {
text: self.getTitle()
}), self.quality = new Element("span.quality", {
events: {
click: function(e) {
var releases = self.detail.getElement(".item-actions .releases");
if (releases.isVisible()) releases.fireEvent("click", [ e ]);
}
}
}), self.actions = new Element("div.item-actions"));
if (self.profile.data) {
self.profile.getTypes().each(function(type) {
var q = self.addQuality(type.get("quality"), type.get("3d"));
if ((type.finish == true || type.get("finish")) && !q.hasClass("finish")) {
q.addClass("finish");
q.set("title", q.get("title") + " Will finish searching for this movie if this quality is found.");
}
});
}
self.updateReleases();
Object.each(self.options.actions, function(action, key) {
self.action[key.toLowerCase()] = action = new self.options.actions[key](self);
if (action.el) self.actions.adopt(action);
});
},
updateReleases: function() {
var self = this;
if (!self.data.releases || self.data.releases.length == 0) return;
self.data.releases.each(function(release) {
var q = self.quality.getElement(".q_" + release.quality + (release.is_3d ? ".is_3d" : ":not(.is_3d)")), status = release.status;
if (!q && (status == "snatched" || status == "seeding" || status == "done")) q = self.addQuality(release.quality, release.is_3d || false);
if (q && !q.hasClass(status)) {
q.addClass(status);
q.set("title", (q.get("title") ? q.get("title") : "") + " status: " + status);
}
});
},
addQuality: function(quality, is_3d) {
var self = this, q = Quality.getQuality(quality);
return new Element("span", {
text: q.label + (is_3d ? " 3D" : ""),
class: "q_" + q.identifier + (is_3d ? " is_3d" : ""),
title: ""
}).inject(self.quality);
},
getTitle: function() {
var self = this;
var title = "";
if (self.data.info.number) {
title = "Season " + self.data.info.number;
} else {
title = "Specials";
}
return title;
},
getIdentifier: function() {
var self = this;
try {
return self.get("identifiers").imdb;
} catch (e) {}
return self.get("imdb");
},
get: function(attr) {
return this.data[attr] || this.data.info[attr];
}
});
var Episodes = new Class({
initialize: function(show, options) {
var self = this;
self.show = show;
self.options = options;
},
open: function() {
var self = this;
if (!self.container) {
self.container = new Element("div.options").grab(self.episodes_container = new Element("div.episodes.table"));
self.container.inject(self.show, "top");
Api.request("library.tree", {
data: {
media_id: self.show.data._id
},
onComplete: function(json) {
self.data = json.result;
self.createEpisodes();
}
});
}
self.show.slide("in", self.container, true);
},
createEpisodes: function() {
var self = this;
self.data.seasons.sort(self.sortSeasons);
self.data.seasons.each(function(season) {
self.createSeason(season);
season.episodes.sort(self.sortEpisodes);
season.episodes.each(function(episode) {
self.createEpisode(episode);
});
});
},
createSeason: function(season) {
var self = this, s = new Season(self.show, self.options, season);
$(s).inject(self.episodes_container);
},
createEpisode: function(episode) {
var self = this, e = new Episode(self.show, self.options, episode);
$(e).inject(self.episodes_container);
},
sortSeasons: function(a, b) {
if (!a.info.number) {
return 1;
}
if (!b.info.number) {
return -1;
}
if (a.info.number < b.info.number) return -1;
if (a.info.number > b.info.number) return 1;
return 0;
},
sortEpisodes: function(a, b) {
if (a.info.number < b.info.number) return -1;
if (a.info.number > b.info.number) return 1;
return 0;
}
});
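`sortSeasons` above orders seasons numerically but pushes specials (seasons with no number) to the end, while `sortEpisodes` is a plain numeric sort. The same ordering expressed with a Python sort key, matching the JS comparator's falsy check on `info.number`:

```python
def sort_seasons(seasons):
    # Seasons without a number (specials) sort last; the rest numerically.
    return sorted(seasons, key=lambda s: (not s['info'].get('number'),
                                          s['info'].get('number') or 0))

seasons = [{'info': {}}, {'info': {'number': 2}}, {'info': {'number': 1}}]
print([s['info'].get('number') for s in sort_seasons(seasons)])  # [1, 2, None]
```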
var Show = new Class({
Extends: Movie
});
var ShowsWanted = new Class({
Extends: PageBase,
name: "wanted",
title: "List of TV Shows subscribed to",
folder_browser: null,
has_tab: false,
indexAction: function() {
var self = this;
if (!self.wanted) {
self.wanted = new ShowList({
identifier: "wanted",
status: "active",
type: "show",
actions: [ MA.IMDB, MA.Release, MA.Refresh, MA.Delete ],
add_new: true,
on_empty_element: App.createUserscriptButtons().addClass("empty_wanted")
});
$(self.wanted).inject(self.content);
}
}
});
var NotificationBase = new Class({
Extends: BlockBase,
Implements: [ Options, Events ],
initialize: function(options) {
var self = this;
self.setOptions(options);
App.addEvent("unload", self.stopPoll.bind(self));
App.addEvent("reload", self.startInterval.bind(self, [ true ]));
App.on("notification", self.notify.bind(self));
App.on("message", self.showMessage.bind(self));
App.addEvent("loadSettings", self.addTestButtons.bind(self));
self.notifications = [];
App.addEvent("load", function() {
App.block.notification = new BlockMenu(self, {
button_class: "icon-notifications",
class: "notification_menu",
onOpen: self.markAsRead.bind(self)
});
$(App.block.notification).inject(App.getBlock("search"), "after");
self.badge = new Element("div.badge").inject(App.block.notification, "top").hide();
});
window.addEvent("load", function() {
self.startInterval.delay($(window).getSize().x <= 480 ? 2e3 : 100, self);
});
},
notify: function(result) {
var self = this;
var added = new Date();
added.setTime(result.added * 1e3);
result.el = App.getBlock("notification").addLink(new Element("span." + (result.read ? "read" : "")).adopt(new Element("span.message", {
html: result.message
}), new Element("span.added", {
text: added.timeDiffInWords(),
title: added
})), "top");
self.notifications.include(result);
if ((result.important !== undefined || result.sticky !== undefined) && !result.read) {
var sticky = true;
App.trigger("message", [ result.message, sticky, result ]);
} else if (!result.read) {
self.setBadge(self.notifications.filter(function(n) {
return !n.read;
}).length);
}
},
setBadge: function(value) {
var self = this;
self.badge.set("text", value);
self.badge[value ? "show" : "hide"]();
},
markAsRead: function(force_ids) {
var self = this, ids = force_ids;
if (!force_ids) {
var rn = self.notifications.filter(function(n) {
return !n.read && n.important === undefined;
});
ids = [];
rn.each(function(n) {
ids.include(n._id);
});
}
if (ids.length > 0) Api.request("notification.markread", {
data: {
ids: ids.join(",")
},
onSuccess: function() {
self.setBadge("");
}
});
},
startInterval: function(force) {
var self = this;
if (self.stopped && !force) {
self.stopped = false;
return;
}
self.request = Api.request("notification.listener", {
@@ -3425,185 +3912,6 @@ Profile.Type = new Class({
}
});
var QualityBase = new Class({
tab: "",
content: "",
setup: function(data) {
var self = this;
self.qualities = data.qualities;
self.profiles_list = null;
self.profiles = [];
Array.each(data.profiles, self.createProfilesClass.bind(self));
App.addEvent("loadSettings", self.addSettings.bind(self));
},
getProfile: function(id) {
return this.profiles.filter(function(profile) {
return profile.data._id == id;
}).pick();
},
getActiveProfiles: function() {
return Array.filter(this.profiles, function(profile) {
return !profile.data.hide;
});
},
getQuality: function(identifier) {
try {
return this.qualities.filter(function(q) {
return q.identifier == identifier;
}).pick();
} catch (e) {}
return {};
},
addSettings: function() {
var self = this;
self.settings = App.getPage("Settings");
self.settings.addEvent("create", function() {
var tab = self.settings.createSubTab("profile", {
label: "Quality",
name: "profile",
subtab_label: "Qualities"
}, self.settings.tabs.searcher, "searcher");
self.tab = tab.tab;
self.content = tab.content;
self.createProfiles();
self.createProfileOrdering();
self.createSizes();
});
},
createProfiles: function() {
var self = this;
var non_core_profiles = Array.filter(self.profiles, function(profile) {
return !profile.isCore();
});
var count = non_core_profiles.length;
self.settings.createGroup({
label: "Quality Profiles",
description: "Create your own profiles with multiple qualities."
}).inject(self.content).adopt(self.profile_container = new Element("div.container"), new Element("a.add_new_profile", {
text: count > 0 ? "Create another quality profile" : "Click here to create a quality profile.",
events: {
click: function() {
var profile = self.createProfilesClass();
$(profile).inject(self.profile_container);
}
}
}));
Array.each(non_core_profiles, function(profile) {
$(profile).inject(self.profile_container);
});
},
createProfilesClass: function(data) {
var self = this;
data = data || {
id: randomString()
};
var profile = new Profile(data);
self.profiles.include(profile);
return profile;
},
createProfileOrdering: function() {
var self = this;
self.settings.createGroup({
label: "Profile Defaults",
description: "(Needs refresh '" + (App.isMac() ? "CMD+R" : "F5") + "' after editing)"
}).grab(new Element(".ctrlHolder#profile_ordering").adopt(new Element("label[text=Order]"), self.profiles_list = new Element("ul"), new Element("p.formHint", {
html: "Change the order the profiles are in the dropdown list. Uncheck to hide it completely.<br />First one will be default."
}))).inject(self.content);
Array.each(self.profiles, function(profile) {
var check;
new Element("li", {
"data-id": profile.data._id
}).adopt(check = new Element("input[type=checkbox]", {
checked: !profile.data.hide,
events: {
change: self.saveProfileOrdering.bind(self)
}
}), new Element("span.profile_label", {
text: profile.data.label
}), new Element("span.handle.icon-handle")).inject(self.profiles_list);
});
var sorted_changed = false;
self.profile_sortable = new Sortables(self.profiles_list, {
revert: true,
handle: ".handle",
opacity: .5,
onSort: function() {
sorted_changed = true;
},
onComplete: function() {
if (sorted_changed) {
self.saveProfileOrdering();
sorted_changed = false;
}
}
});
},
saveProfileOrdering: function() {
var self = this, ids = [], hidden = [];
self.profiles_list.getElements("li").each(function(el, nr) {
ids.include(el.get("data-id"));
hidden[nr] = +!el.getElement("input[type=checkbox]").get("checked");
});
Api.request("profile.save_order", {
data: {
ids: ids,
hidden: hidden
}
});
},
createSizes: function() {
var self = this;
var group = self.settings.createGroup({
label: "Sizes",
description: "Edit the minimal and maximum sizes (in MB) for each quality.",
advanced: true,
name: "sizes"
}).inject(self.content);
new Element("div.item.head.ctrlHolder").adopt(new Element("span.label", {
text: "Quality"
}), new Element("span.min", {
text: "Min"
}), new Element("span.max", {
text: "Max"
})).inject(group);
Array.each(self.qualities, function(quality) {
new Element("div.ctrlHolder.item").adopt(new Element("span.label", {
text: quality.label
}), new Element("input.min[type=text]", {
value: quality.size_min,
events: {
keyup: function(e) {
self.changeSize(quality.identifier, "size_min", e.target.get("value"));
}
}
}), new Element("input.max[type=text]", {
value: quality.size_max,
events: {
keyup: function(e) {
self.changeSize(quality.identifier, "size_max", e.target.get("value"));
}
}
})).inject(group);
});
},
size_timer: {},
changeSize: function(identifier, type, value) {
var self = this;
if (self.size_timer[identifier + type]) clearTimeout(self.size_timer[identifier + type]);
self.size_timer[identifier + type] = function() {
Api.request("quality.size.save", {
data: {
identifier: identifier,
value_type: type,
value: value
}
});
}.delay(300);
}
});
window.Quality = new QualityBase();
Page.Userscript = new Class({
Extends: PageBase,
order: 80,

8
couchpotato/static/scripts/couchpotato.js

@@ -164,6 +164,14 @@
self.block.more.addLink(a);
});
// Add support classes
var support_classes = [];
self.options.support.each(function(support){
if(support){
support_classes.push('support_'+support);
}
});
document.body.addClass(support_classes.join(' '));
new ScrollSpy({
min: 10,

2
couchpotato/static/style/combined.min.css

@@ -392,6 +392,8 @@
.toggle_menu h2{height:40px}
@media all and (max-width:480px){.toggle_menu h2{font-size:16px;text-align:center;height:30px}
}
.header .navigation ul .tab_shows{display:none}
.support_show .header .navigation ul .tab_shows{display:block}
.add_new_category{padding:20px;display:block;text-align:center;font-size:20px}
.category{margin-bottom:20px;position:relative}
.category>.delete{position:absolute;padding:16px;right:0;cursor:pointer;opacity:.6;color:#fd5353}

5
couchpotato/templates/index.html

@@ -67,7 +67,7 @@
Quality.setup({
'profiles': {{ json_encode(fireEvent('profile.all', single = True)) }},
'qualities': {{ json_encode(fireEvent('quality.all', single = True)) }}
'qualities': {{ json_encode(fireEvent('quality.all', merge = True)) }}
});
CategoryList.setup({{ json_encode(fireEvent('category.all', single = True)) }});
@@ -81,7 +81,8 @@
'app_dir': {{ json_encode(Env.get('app_dir', unicode = True)) }},
'data_dir': {{ json_encode(Env.get('data_dir', unicode = True)) }},
'pid': {{ json_encode(Env.getPid()) }},
'userscript_version': {{ json_encode(fireEvent('userscript.get_version', single = True)) }}
'userscript_version': {{ json_encode(fireEvent('userscript.get_version', single = True)) }},
'support': {{ json_encode(['show' if Env.setting('enabled', 'shows') else '', 'movie']) }}
});
})

42
libs/qcond/__init__.py

@@ -0,0 +1,42 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from qcond.transformers.merge import MergeTransformer
from qcond.transformers.slice import SliceTransformer
from qcond.transformers.strip_common import StripCommonTransformer
__version_info__ = ('0', '1', '0')
__version_branch__ = 'master'
__version__ = "%s%s" % (
'.'.join(__version_info__),
'-' + __version_branch__ if __version_branch__ else ''
)
class QueryCondenser(object):
def __init__(self):
self.transformers = [
MergeTransformer(),
SliceTransformer(),
StripCommonTransformer()
]
def distinct(self, titles):
for transformer in self.transformers:
titles = transformer.run(titles)
return titles
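`QueryCondenser.distinct` above simply threads the title list through each transformer in turn. A minimal sketch of that pipeline, with two hypothetical stub transformers standing in for the real `MergeTransformer`/`SliceTransformer`/`StripCommonTransformer` (which depend on the `logr` package), shows the chaining:

```python
# Sketch of the qcond pipeline pattern. Lowercase and Dedupe are
# illustrative stubs, NOT part of qcond; only the chaining inside
# distinct() mirrors the real QueryCondenser.
class Transformer(object):
    def run(self, titles):
        raise NotImplementedError()

class Lowercase(Transformer):  # stub for illustration only
    def run(self, titles):
        return [t.lower() for t in titles]

class Dedupe(Transformer):     # stub, order-preserving de-dupe
    def run(self, titles):
        out = []
        for t in titles:
            if t not in out:
                out.append(t)
        return out

class QueryCondenser(object):
    def __init__(self):
        self.transformers = [Lowercase(), Dedupe()]

    def distinct(self, titles):
        # Each transformer consumes the previous transformer's output
        for transformer in self.transformers:
            titles = transformer.run(titles)
        return titles

print(QueryCondenser().distinct(['The Office', 'the office', 'Fargo']))
# ['the office', 'fargo']
```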

23
libs/qcond/compat.py

@@ -0,0 +1,23 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
PY3 = sys.version_info[0] == 3
if PY3:
xrange = range
else:
xrange = xrange

84
libs/qcond/helpers.py

@@ -0,0 +1,84 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from difflib import SequenceMatcher
import re
import sys
from logr import Logr
from qcond.compat import xrange
PY3 = sys.version_info[0] == 3
def simplify(s):
s = s.lower()
s = re.sub(r"(\w)'(\w)", r"\1\2", s)
return s
def strip(s):
return re.sub(r"^(\W*)(.*?)(\W*)$", r"\2", s)
def create_matcher(a, b, swap_longest = True, case_sensitive = False):
# Ensure longest string is a
if swap_longest and len(b) > len(a):
a_ = a
a = b
b = a_
if not case_sensitive:
a = a.upper()
b = b.upper()
return SequenceMatcher(None, a, b)
def first(function_or_none, sequence):
if PY3:
for item in filter(function_or_none, sequence):
return item
else:
result = filter(function_or_none, sequence)
if len(result):
return result[0]
return None
def sorted_append(sequence, item, func):
if not len(sequence):
sequence.insert(0, item)
return
x = 0
for x in xrange(len(sequence)):
if func(sequence[x]):
sequence.insert(x, item)
return
sequence.append(item)
def itemsMatch(L1, L2):
return len(L1) == len(L2) and sorted(L1) == sorted(L2)
def distinct(sequence):
result = []
for item in sequence:
if item not in result:
result.append(item)
return result
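The helpers above are pure functions, so they can be exercised directly. This sketch copies `sorted_append` and `distinct` from the file, to show how `sorted_append` inserts before the first element matching the predicate (the pattern `MergeTransformer` uses to keep child nodes ordered by descending `num_children`):

```python
# Copies of the pure helpers above, for a self-contained demo.
def sorted_append(sequence, item, func):
    # Insert `item` before the first element for which func() is true,
    # otherwise append at the end.
    if not len(sequence):
        sequence.insert(0, item)
        return
    for x in range(len(sequence)):
        if func(sequence[x]):
            sequence.insert(x, item)
            return
    sequence.append(item)

def distinct(sequence):
    # Order-preserving de-duplication
    result = []
    for item in sequence:
        if item not in result:
            result.append(item)
    return result

seq = [5, 3, 1]
sorted_append(seq, 4, lambda a: a < 4)  # lands just before 3
print(seq)                    # [5, 4, 3, 1]
print(distinct(['a', 'b', 'a']))  # ['a', 'b']
```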

0
libs/qcond/transformers/__init__.py

21
libs/qcond/transformers/base.py

@@ -0,0 +1,21 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class Transformer(object):
def __init__(self):
pass
def run(self, titles):
raise NotImplementedError()

241
libs/qcond/transformers/merge.py

@@ -0,0 +1,241 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from operator import itemgetter
from logr import Logr
from qcond.helpers import simplify, strip, first, sorted_append, distinct
from qcond.transformers.base import Transformer
from qcond.compat import xrange
class MergeTransformer(Transformer):
def __init__(self):
super(MergeTransformer, self).__init__()
def run(self, titles):
titles = distinct([simplify(title) for title in titles])
Logr.info(str(titles))
Logr.debug("------------------------------------------------------------")
root, tails = self.parse(titles)
Logr.debug("--------------------------PARSE-----------------------------")
for node in root:
print_tree(node)
Logr.debug("--------------------------MERGE-----------------------------")
self.merge(root)
Logr.debug("--------------------------FINAL-----------------------------")
for node in root:
print_tree(node)
Logr.debug("--------------------------RESULT-----------------------------")
scores = {}
results = []
for tail in tails:
score, value, original_value = tail.full_value()
if value in scores:
scores[value] += score
else:
results.append((value, original_value))
scores[value] = score
Logr.debug("%s %s %s", score, value, original_value)
sorted_results = sorted(results, key=lambda item: (scores[item[0]], item[1]), reverse = True)
return [result[0] for result in sorted_results]
def parse(self, titles):
root = []
tails = []
for title in titles:
Logr.debug(title)
cur = None
words = title.split(' ')
for wx in xrange(len(words)):
word = strip(words[wx])
if cur is None:
cur = find_node(root, word)
if cur is None:
cur = DNode(word, None, num_children=len(words) - wx, original_value=title)
root.append(cur)
else:
parent = cur
parent.weight += 1
cur = find_node(parent.right, word)
if cur is None:
Logr.debug("%s %d", word, len(words) - wx)
cur = DNode(word, parent, num_children=len(words) - wx)
sorted_append(parent.right, cur, lambda a: a.num_children < cur.num_children)
else:
cur.weight += 1
tails.append(cur)
return root, tails
def merge(self, root):
for x in range(len(root)):
Logr.debug(root[x])
root[x].right = self._merge(root[x].right)
Logr.debug('=================================================================')
return root
def get_nodes_right(self, value):
if type(value) is not list:
value = [value]
nodes = []
for node in value:
nodes.append(node)
for child in self.get_nodes_right(node.right):
nodes.append(child)
return nodes
def destroy_nodes_right(self, value):
nodes = self.get_nodes_right(value)
for node in nodes:
node.value = None
node.dead = True
def _merge(self, nodes, depth = 0):
Logr.debug(str('\t' * depth) + str(nodes))
if not len(nodes):
return []
top = nodes[0]
# Merge into top
for x in range(len(nodes)):
# Merge extra results into top
if x > 0:
top.value = None
top.weight += nodes[x].weight
self.destroy_nodes_right(top.right)
if len(nodes[x].right):
top.join_right(nodes[x].right)
Logr.debug("= %s joined %s", nodes[x], top)
nodes[x].dead = True
nodes = [n for n in nodes if not n.dead]
# Traverse further
for node in nodes:
if len(node.right):
node.right = self._merge(node.right, depth + 1)
return nodes
def print_tree(node, depth = 0):
Logr.debug(str('\t' * depth) + str(node))
if len(node.right):
for child in node.right:
print_tree(child, depth + 1)
else:
Logr.debug(node.full_value()[1])
def find_node(node_list, value):
# Try find adjacent node match
for node in node_list:
if node.value == value:
return node
return None
class DNode(object):
def __init__(self, value, parent, right=None, weight=1, num_children=None, original_value=None):
self.value = value
self.parent = parent
if right is None:
right = []
self.right = right
self.weight = weight
self.original_value = original_value
self.num_children = num_children
self.dead = False
def join_right(self, nodes):
for node in nodes:
duplicate = first(lambda x: x.value == node.value, self.right)
if duplicate:
duplicate.weight += node.weight
duplicate.join_right(node.right)
else:
node.parent = self
self.right.append(node)
def full_value(self):
words = []
total_score = 0
cur = self
root = None
while cur is not None:
if cur.value and not cur.dead:
words.insert(0, cur.value)
total_score += cur.weight
if cur.parent is None:
root = cur
cur = cur.parent
return float(total_score) / len(words), ' '.join(words), root.original_value if root else None
def __repr__(self):
return '<%s value:"%s", weight: %s, num_children: %s%s%s>' % (
'DNode',
self.value,
self.weight,
self.num_children,
(', original_value: %s' % self.original_value) if self.original_value else '',
' REMOVING' if self.dead else ''
)
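`DNode.full_value` walks from a tail node back up through `parent` links, collecting live words and averaging the accumulated weights over the word count. A hand-built two-node chain (a trimmed copy of the `DNode` above, with `join_right` and `__repr__` omitted) illustrates the score:

```python
class DNode(object):
    # Trimmed copy of the DNode above, for a self-contained demo.
    def __init__(self, value, parent, right=None, weight=1,
                 num_children=None, original_value=None):
        self.value = value
        self.parent = parent
        self.right = right if right is not None else []
        self.weight = weight
        self.original_value = original_value
        self.num_children = num_children
        self.dead = False

    def full_value(self):
        words, total_score = [], 0
        cur, root = self, None
        while cur is not None:
            if cur.value and not cur.dead:
                words.insert(0, cur.value)
                total_score += cur.weight
            if cur.parent is None:
                root = cur
            cur = cur.parent
        return (float(total_score) / len(words), ' '.join(words),
                root.original_value if root else None)

root = DNode('the', None, original_value='The Office')
tail = DNode('office', root, weight=3)   # word seen 3 times
print(tail.full_value())
# (2.0, 'the office', 'The Office') -- (3 + 1) / 2 words
```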

280
libs/qcond/transformers/slice.py

@@ -0,0 +1,280 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from logr import Logr
from qcond.helpers import create_matcher
from qcond.transformers.base import Transformer
class SliceTransformer(Transformer):
def __init__(self):
super(SliceTransformer, self).__init__()
def run(self, titles):
nodes = []
# Create a node for each title
for title in titles:
nodes.append(SimNode(title))
# Calculate similarities between nodes
for node in nodes:
calculate_sim_links(node, [n for n in nodes if n != node])
kill_nodes_above(nodes, 0.90)
Logr.debug('---------------------------------------------------------------------')
print_link_tree(nodes)
Logr.debug('%s %s', len(nodes), [n.value for n in nodes])
Logr.debug('---------------------------------------------------------------------')
kill_trailing_nodes(nodes)
Logr.debug('---------------------------------------------------------------------')
# Sort remaining nodes by 'num_merges'
nodes = sorted(nodes, key=lambda n: n.num_merges, reverse=True)
print_link_tree(nodes)
Logr.debug('---------------------------------------------------------------------')
Logr.debug('%s %s', len(nodes), [n.value for n in nodes])
return [n.value for n in nodes]
class SimLink(object):
def __init__(self, similarity, opcodes, stats):
self.similarity = similarity
self.opcodes = opcodes
self.stats = stats
class SimNode(object):
def __init__(self, value):
self.value = value
self.dead = False
self.num_merges = 0
self.links = {} # {<other SimNode>: <SimLink>}
def kill_nodes(nodes, killed_nodes):
# Remove killed nodes from root list
for node in killed_nodes:
if node in nodes:
nodes.remove(node)
# Remove killed nodes from links
for killed_node in killed_nodes:
for node in nodes:
if killed_node in node.links:
node.links.pop(killed_node)
def kill_nodes_above(nodes, above_sim):
killed_nodes = []
for node in nodes:
if node.dead:
continue
Logr.debug(node.value)
for link_node, link in node.links.items():
if link_node.dead:
continue
Logr.debug('\t%0.2f -- %s', link.similarity, link_node.value)
if link.similarity >= above_sim:
if len(link_node.value) > len(node.value):
Logr.debug('\t\tvery similar, killed this node')
link_node.dead = True
node.num_merges += 1
killed_nodes.append(link_node)
else:
Logr.debug('\t\tvery similar, killed owner')
node.dead = True
link_node.num_merges += 1
killed_nodes.append(node)
kill_nodes(nodes, killed_nodes)
def print_link_tree(nodes):
for node in nodes:
Logr.debug(node.value)
Logr.debug('\tnum_merges: %s', node.num_merges)
if len(node.links):
Logr.debug('\t========== LINKS ==========')
for link_node, link in node.links.items():
Logr.debug('\t%0.2f -- %s', link.similarity, link_node.value)
Logr.debug('\t---------------------------')
def kill_trailing_nodes(nodes):
killed_nodes = []
for node in nodes:
if node.dead:
continue
Logr.debug(node.value)
for link_node, link in node.links.items():
if link_node.dead:
continue
is_valid = link.stats.get('valid', False)
has_deletions = False
has_insertions = False
has_replacements = False
for opcode in link.opcodes:
if opcode[0] == 'delete':
has_deletions = True
if opcode[0] == 'insert':
has_insertions = True
if opcode[0] == 'replace':
has_replacements = True
equal_perc = link.stats.get('equal', 0) / float(len(node.value))
insert_perc = link.stats.get('insert', 0) / float(len(node.value))
Logr.debug('\t({0:<24}) [{1:02d}:{2:02d} = {3:02d} {4:3.0f}% {5:3.0f}%] -- {6:<45}'.format(
'd:%s, i:%s, r:%s' % (has_deletions, has_insertions, has_replacements),
len(node.value), len(link_node.value), link.stats.get('equal', 0),
equal_perc * 100, insert_perc * 100,
'"{0}"'.format(link_node.value)
))
Logr.debug('\t\t%s', link.stats)
kill = all([
is_valid,
equal_perc >= 0.5,
insert_perc < 2,
has_insertions,
not has_deletions,
not has_replacements
])
if kill:
Logr.debug('\t\tkilled this node')
link_node.dead = True
node.num_merges += 1
killed_nodes.append(link_node)
kill_nodes(nodes, killed_nodes)
stats_print_format = "\t{0:<8} ({1:2d}:{2:2d}) ({3:2d}:{4:2d})"
def get_index_values(iterable, a, b):
return (
iterable[a] if a else None,
iterable[b] if b else None
)
def get_indices(iterable, a, b):
return (
a if 0 < a < len(iterable) else None,
b if 0 < b < len(iterable) else None
)
def get_opcode_stats(for_node, node, opcodes):
stats = {}
for tag, i1, i2, j1, j2 in opcodes:
Logr.debug(stats_print_format.format(
tag, i1, i2, j1, j2
))
if tag in ['insert', 'delete']:
ax = None, None
bx = None, None
if tag == 'insert':
ax = get_indices(for_node.value, i1 - 1, i1)
bx = get_indices(node.value, j1, j2 - 1)
if tag == 'delete':
ax = get_indices(for_node.value, j1 - 1, j1)
bx = get_indices(node.value, i1, i2 - 1)
av = get_index_values(for_node.value, *ax)
bv = get_index_values(node.value, *bx)
Logr.debug(
'\t\t%s %s [%s><%s] <---> %s %s [%s><%s]',
ax, av, av[0], av[1],
bx, bv, bv[0], bv[1]
)
head_valid = av[0] in [None, ' '] or bv[0] in [None, ' ']
tail_valid = av[1] in [None, ' '] or bv[1] in [None, ' ']
valid = head_valid and tail_valid
if 'valid' not in stats or (stats['valid'] and not valid):
stats['valid'] = valid
Logr.debug('\t\t' + ('VALID' if valid else 'INVALID'))
if tag not in stats:
stats[tag] = 0
stats[tag] += (i2 - i1) or (j2 - j1)
return stats
def calculate_sim_links(for_node, other_nodes):
for node in other_nodes:
if node in for_node.links:
continue
Logr.debug('calculating similarity between "%s" and "%s"', for_node.value, node.value)
# Get similarity
similarity_matcher = create_matcher(for_node.value, node.value)
similarity = similarity_matcher.quick_ratio()
# Get for_node -> node opcodes
a_opcodes_matcher = create_matcher(for_node.value, node.value, swap_longest = False)
a_opcodes = a_opcodes_matcher.get_opcodes()
a_stats = get_opcode_stats(for_node, node, a_opcodes)
Logr.debug('-' * 100)
# Get node -> for_node opcodes
b_opcodes_matcher = create_matcher(node.value, for_node.value, swap_longest = False)
b_opcodes = b_opcodes_matcher.get_opcodes()
b_stats = get_opcode_stats(for_node, node, b_opcodes)
for_node.links[node] = SimLink(similarity, a_opcodes, a_stats)
node.links[for_node] = SimLink(similarity, b_opcodes, b_stats)
#raw_input('Press ENTER to continue')
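`create_matcher` is a thin wrapper around `difflib.SequenceMatcher` from the standard library. Using `SequenceMatcher` directly (skipping the longest-first swap and upper-casing for brevity) shows the trailing-`insert` opcode pattern that `kill_trailing_nodes` keys on when one title is the other plus a suffix:

```python
from difflib import SequenceMatcher

a = 'GAME OF THRONES'
b = 'GAME OF THRONES 2011'
m = SequenceMatcher(None, a, b)

# b is a plus a trailing suffix: one 'equal' run, then one 'insert'
tags = [op[0] for op in m.get_opcodes()]
print(tags)       # ['equal', 'insert']
print(m.ratio())  # ~0.857: 2 * 15 matching chars / (15 + 20)
```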

26
libs/qcond/transformers/strip_common.py

@@ -0,0 +1,26 @@
# Copyright 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from qcond.transformers.base import Transformer
COMMON_WORDS = [
'the'
]
class StripCommonTransformer(Transformer):
def run(self, titles):
return [title for title in titles if title.lower() not in COMMON_WORDS]
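Note that a title is only dropped when the *entire* title is a common word; titles that merely contain one survive. A self-contained sketch (with the `Transformer` base stubbed in, since the real one lives in `qcond.transformers.base`) demonstrates this:

```python
# Self-contained demo of StripCommonTransformer's filtering;
# the Transformer base class is stubbed here for illustration.
COMMON_WORDS = ['the']

class Transformer(object):
    def run(self, titles):
        raise NotImplementedError()

class StripCommonTransformer(Transformer):
    def run(self, titles):
        # Drop candidates that consist solely of a common word
        return [title for title in titles if title.lower() not in COMMON_WORDS]

print(StripCommonTransformer().run(['the', 'The Office', 'office']))
# ['The Office', 'office']
```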

4
libs/tvdb_api/.gitignore

@@ -0,0 +1,4 @@
.DS_Store
*.pyc
*.egg-info/*
dist/*.tar.gz

9
libs/tvdb_api/.travis.yml

@@ -0,0 +1,9 @@
language: python
python:
- 2.5
- 2.6
- 2.7
install: pip install nose
script: nosetests

4
libs/tvdb_api/MANIFEST.in

@@ -0,0 +1,4 @@
include UNLICENSE
include readme.md
include tests/*.py
include Rakefile

103
libs/tvdb_api/Rakefile

@@ -0,0 +1,103 @@
require 'fileutils'
task :default => [:clean]
task :clean do
[".", "tests"].each do |cd|
puts "Cleaning directory #{cd}"
Dir.new(cd).each do |t|
if t =~ /.*\.pyc$/
puts "Removing #{File.join(cd, t)}"
File.delete(File.join(cd, t))
end
end
end
end
desc "Upversion files"
task :upversion do
puts "Upversioning"
Dir.glob("*.py").each do |filename|
f = File.new(filename, File::RDWR)
contents = f.read()
contents.gsub!(/__version__ = ".+?"/){|m|
cur_version = m.scan(/\d+\.\d+/)[0].to_f
new_version = cur_version + 0.1
puts "Current version: #{cur_version}"
puts "New version: #{new_version}"
new_line = "__version__ = \"#{new_version}\""
puts "Old line: #{m}"
puts "New line: #{new_line}"
m = new_line
}
puts contents[0]
f.truncate(0) # empty the existing file
f.seek(0)
f.write(contents.to_s) # write modified file
f.close()
end
end
desc "Upload current version to PyPi"
task :topypi => :test do
cur_file = File.open("tvdb_api.py").read()
tvdb_api_version = cur_file.scan(/__version__ = "(.*)"/)
tvdb_api_version = tvdb_api_version[0][0].to_f
puts "Build sdist and send tvdb_api v#{tvdb_api_version} to PyPi?"
if $stdin.gets.chomp == "y"
puts "Sending source-dist (sdist) to PyPi"
if system("python setup.py sdist register upload")
puts "tvdb_api uploaded!"
end
else
puts "Cancelled"
end
end
desc "Profile by running unittests"
task :profile do
cd "tests"
puts "Profiling.."
`python -m cProfile -o prof_runtest.prof runtests.py`
puts "Converting prof to dot"
`python gprof2dot.py -o prof_runtest.dot -f pstats prof_runtest.prof`
puts "Generating graph"
`~/Applications/dev/graphviz.app/Contents/macOS/dot -Tpng -o profile.png prof_runtest.dot -Gbgcolor=black`
puts "Cleanup"
rm "prof_runtest.dot"
rm "prof_runtest.prof"
end
task :test do
puts "Nosetest'ing"
if not system("nosetests -v --with-doctest")
raise "Test failed!"
end
puts "Doctesting *.py (excluding setup.py)"
Dir.glob("*.py").select{|e| ! e.match(/setup.py/)}.each do |filename|
if filename =~ /^setup\.py/
skip
end
puts "Doctesting #{filename}"
if not system("python", "-m", "doctest", filename)
raise "Failed doctest"
end
end
puts "Doctesting readme.md"
if not system("python", "-m", "doctest", "readme.md")
raise "Doctest"
end
end

26
libs/tvdb_api/UNLICENSE

@@ -0,0 +1,26 @@
Copyright 2011-2012 Ben Dickson (dbr)
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>

0
libs/tvdb_api/__init__.py

109
libs/tvdb_api/readme.md

@@ -0,0 +1,109 @@
# `tvdb_api`
`tvdb_api` is an easy to use interface to [thetvdb.com][tvdb]
`tvnamer` has moved to a separate repository: [github.com/dbr/tvnamer][tvnamer] - it is a utility which uses `tvdb_api` to rename files from `some.show.s01e03.blah.abc.avi` to `Some Show - [01x03] - The Episode Name.avi` (which works by getting the episode name from `tvdb_api`)
[![Build Status](https://secure.travis-ci.org/dbr/tvdb_api.png?branch=master)](http://travis-ci.org/dbr/tvdb_api)
## To install
You can easily install `tvdb_api` via `easy_install`
easy_install tvdb_api
You may need to use sudo, depending on your setup:
sudo easy_install tvdb_api
The [`tvnamer`][tvnamer] command-line tool can also be installed via `easy_install`; this installs `tvdb_api` as a dependency:
easy_install tvnamer
## Basic usage
import tvdb_api
t = tvdb_api.Tvdb()
episode = t['My Name Is Earl'][1][3] # get season 1, episode 3 of show
print episode['episodename'] # Print episode name
## Advanced usage
Most of the documentation is in docstrings. The examples are tested (using doctest) so will always be up to date and working.
The docstring for `Tvdb.__init__` lists all initialisation arguments, including support for non-English searches, custom "Select Series" interfaces and enabling the retrieval of banners and extended actor information. You can also override the default API key using `apikey`, recommended if you're using `tvdb_api` in a larger script or application
### Exceptions
There are several exceptions you may catch, these can be imported from `tvdb_api`:
- `tvdb_error` - this is raised when there is an error communicating with [thetvdb.com][tvdb] (a network error most commonly)
- `tvdb_userabort` - raised when a user aborts the Select Series dialog (by `ctrl+c`, or entering `q`)
- `tvdb_shownotfound` - raised when `t['show name']` cannot find anything
- `tvdb_seasonnotfound` - raised when the requested season (`t['show name'][99]`) does not exist
- `tvdb_episodenotfound` - raised when the requested episode (`t['show name'][1][99]`) does not exist.
- `tvdb_attributenotfound` - raised when the requested attribute is not found (`t['show name']['an attribute']`, `t['show name'][1]['an attribute']`, or `t['show name'][1][1]['an attribute']`)
### Series data
All data exposed by [thetvdb.com][tvdb] is accessible via the `Show` class. A Show is retrieved by doing..
>>> import tvdb_api
>>> t = tvdb_api.Tvdb()
>>> show = t['scrubs']
>>> type(show)
<class 'tvdb_api.Show'>
For example, to find out what network Scrubs airs on:
>>> t['scrubs']['network']
u'ABC'
The data is stored in an attribute named `data`, within the Show instance:
>>> t['scrubs'].data.keys()
['networkid', 'rating', 'airs_dayofweek', 'contentrating', 'seriesname', 'id', 'airs_time', 'network', 'fanart', 'lastupdated', 'actors', 'ratingcount', 'status', 'added', 'poster', 'imdb_id', 'genre', 'banner', 'seriesid', 'language', 'zap2it_id', 'addedby', 'firstaired', 'runtime', 'overview']
Although each element is also accessible via `t['scrubs']` for ease-of-use:
>>> t['scrubs']['rating']
u'9.0'
This is the recommended way of retrieving "one-off" data (for example, if you are only interested in "seriesname"). If you wish to iterate over all data, or check if a particular show has a specific piece of data, use the `data` attribute,
>>> 'rating' in t['scrubs'].data
True
### Banners and actors
Since banners and actors are separate XML files, retrieving them by default is undesirable. If you wish to retrieve banners (and other fanart), use the `banners` Tvdb initialisation argument:
>>> from tvdb_api import Tvdb
>>> t = Tvdb(banners = True)
Then access the data using a `Show`'s `_banners` key:
>>> t['scrubs']['_banners'].keys()
['fanart', 'poster', 'series', 'season']
The banner data structure will be improved in future versions.
Extended actor data is accessible similarly:
>>> t = Tvdb(actors = True)
>>> actors = t['scrubs']['_actors']
>>> actors[0]
<Actor "Zach Braff">
>>> actors[0].keys()
['sortorder', 'image', 'role', 'id', 'name']
>>> actors[0]['role']
u'Dr. John Michael "J.D." Dorian'
Remember a simple list of actors is accessible via the default Show data:
>>> t['scrubs']['actors']
u'|Zach Braff|Donald Faison|Sarah Chalke|Christa Miller|Aloma Wright|Robert Maschio|Sam Lloyd|Neil Flynn|Ken Jenkins|Judy Reyes|John C. McGinley|Travis Schuldt|Johnny Kastl|Heather Graham|Michael Mosley|Kerry Bish\xe9|Dave Franco|Eliza Coupe|'
[tvdb]: http://thetvdb.com
[tvnamer]: http://github.com/dbr/tvnamer

35
libs/tvdb_api/setup.py

@@ -0,0 +1,35 @@
from setuptools import setup
setup(
name = 'tvdb_api',
version='1.8.2',
author='dbr/Ben',
description='Interface to thetvdb.com',
url='http://github.com/dbr/tvdb_api/tree/master',
license='unlicense',
long_description="""\
An easy to use API interface to TheTVDB.com
Basic usage is:
>>> import tvdb_api
>>> t = tvdb_api.Tvdb()
>>> ep = t['My Name Is Earl'][1][22]
>>> ep
<Episode 01x22 - Stole a Badge>
>>> ep['episodename']
u'Stole a Badge'
""",
py_modules = ['tvdb_api', 'tvdb_ui', 'tvdb_exceptions', 'tvdb_cache'],
classifiers=[
"Intended Audience :: Developers",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Multimedia",
"Topic :: Utilities",
"Topic :: Software Development :: Libraries :: Python Modules",
]
)

Some files were not shown because too many files changed in this diff
