
Celery Redis task not working in my Django app on the Heroku server

I have a task that works fine on my local server, but when I push to Heroku nothing happens and there is no error message. I'm new to this; locally I would start the worker with

celery worker -A blog -l info

So I'm guessing the problem has to do with starting the worker, because I don't know how to do that on Heroku. I suspect I'm supposed to do it from within my app. Here is my code:

celery.py

import os

from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program. 
os.environ.setdefault(
    'DJANGO_SETTINGS_MODULE', 'gettingstarted.settings' 
) 

app = Celery('blog') 

# Using a string here means the worker will not have to 
# pickle the object when using Windows. 
app.config_from_object('django.conf:settings') 
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) 
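
Since app.config_from_object('django.conf:settings') pulls Celery's configuration out of Django settings, the broker has to be defined there or the worker has nothing to connect to on Heroku. A minimal sketch of the relevant settings.py entries, assuming the Celery 3.x-style uppercase names this era of project would use (the localhost fallback is an assumption for local runs):

import os

# Heroku injects REDIS_URL into the environment when the Redis add-on is attached
BROKER_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')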

My tasks.py

import requests 
import random 
import os 

from bs4 import BeautifulSoup 
from .celery import app 
from .models import Post 
from django.contrib.auth.models import User 


@app.task
def the_star():
    def swappo():
        # Rotate through a few user-agent strings to vary the requests
        user_one = 'Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0'
        user_two = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)'
        user_thr = 'Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko'
        user_for = 'Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0'

        agent_list = [user_one, user_two, user_thr, user_for]
        return random.choice(agent_list)

    headers = {
        "user-agent": swappo(),
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "accept-charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
        "accept-encoding": "gzip,deflate,sdch",
        "accept-language": "en-US,en;q=0.8",
    }

    # scraping from worldstar
    url_to = 'http://www.worldstarhiphop.com'
    html = requests.get(url_to, headers=headers)
    soup = BeautifulSoup(html.text, 'html5lib')
    titles = soup.find_all('section', 'box')
    name = 'World Star'

    if os.getenv('_system_name') == 'OSX':
        author = User.objects.get(id=2)
    else:
        author = User.objects.get(id=3)

    def make_soup(url):
        # Pull the paragraph text out of an entry's comments page
        the_comments_page = requests.get(url, headers=headers)
        soupdata = BeautifulSoup(the_comments_page.text, 'html5lib')
        comment = soupdata.find('div')
        para = comment.find_all('p')
        kids = [child.text for child in para]
        blu = str(kids).strip('[]')
        return blu

    cleaned_titles = [title for title in titles if title.a.get('href') != 'vsubmit.php']
    world_entries = [{'href': url_to + box.a.get('href'),
                      'src': box.img.get('src'),
                      'text': box.strong.a.text,
                      'comments': make_soup(url_to + box.a.get('href')),
                      'name': name,
                      'url': url_to + box.a.get('href'),
                      'embed': None,
                      'author': None,
                      'video': False
                      } for box in cleaned_titles][:10]  # The count

    for entry in world_entries:
        # Skip entries whose title has already been posted
        if not Post.objects.filter(title=entry['text']).exists():
            post = Post()
            post.title = entry['text']
            post.name = entry['name']
            post.url = entry['url']
            post.body = entry['comments']
            post.image_url = entry['src']
            post.video_path = entry['embed']
            post.author = entry['author']
            post.video = entry['video']
            post.status = 'draft'
            post.save()
            post.tags.add("video", "Musica")
    return world_entries
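
One way to tell a broker problem from a task bug: calling the task directly runs it inline with no worker or Redis involved, while .delay() only enqueues a message and silently does nothing unless a worker is consuming the queue. A quick check from a Django shell (a debugging sketch; the blog.tasks import path is assumed from the -A blog argument):

from blog.tasks import the_star

the_star()        # runs inline; raises immediately if the task itself is broken
the_star.delay()  # only enqueues; needs a running worker to actually execute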

My views.py

from django.shortcuts import redirect

from .tasks import the_star


def shopan(request):
    # Enqueue the scraper; a worker must be running for it to execute
    the_star.delay()
    return redirect('/')
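
For completeness, the view also needs a route before anything can trigger the task; a sketch of the wiring, assuming a Django 1.x-era urls.py (the URL pattern and name are assumptions):

from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^shopan/$', views.shopan, name='shopan'),  # hitting this enqueues the_star
]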

I have REDIS_URL set. Multiple Redis instances were running, so I ran

heroku redis:promote REDIS_URL 

That is what is used in my environment variables, as you can see above. How can I make this work?
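
One thing worth verifying in a setup like this is that the promoted URL is actually visible to the dynos; if REDIS_URL is unset or stale, .delay() publishes to a broker that is not there. The Heroku CLI can confirm it:

heroku config:get REDIS_URL    # prints the redis:// URL the dynos will see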

Answers


You need to add an entry to your Procfile telling Heroku to start a Celery worker:

worker: celery worker -A blog -l info
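
Note the space after the colon. A full Procfile carries every process type the app runs; a sketch, assuming gunicorn serves the web process (the web line is an illustration, not from the question):

web: gunicorn gettingstarted.wsgi
worker: celery worker -A blog -l info

Pushing the Procfile alone is not enough: the worker dyno also has to be scaled up, which is what the comments below arrive at.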

I did just what you said and pushed to my server, but nothing happened. I went back to the docs and searched for Procfile; it says to run heroku ps:scale worker=1. I tried that and it said it couldn't find that formation – losee


I had to separate it, so the colon and worker like this: worker: celery worker -A blog -l info. Then I had to run heroku ps:scale worker=1 and it worked – losee
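
Once the worker dyno is scaled up, its log stream shows whether Celery booted and connected to Redis; a quick way to watch it (the --dyno filter assumes a reasonably recent Heroku CLI):

heroku ps:scale worker=1             # start one worker dyno
heroku logs --tail --dyno worker     # watch the worker connect to the broker and receive tasks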
