Reports6i: Server job queue API package
Hi,
Has anyone tried editing the Reports Server job queue API (rw_server.clean_up_queue)? I don't want the queue table (RW_SERVER_QUEUE) to be truncated each time the server is shut down and restarted.
(By the way, it's mentioned in the Publishing Reports manual that we can edit the API package to override the default behaviour)
I did the following:
1. I commented out the truncation commands in the rw_server.clean_up_queue function.
2. I stopped and restarted the server.
3. The Reports Server inserted duplicate records for scheduled jobs.
Apparently, the Oracle Reports Server component stores job queue information somewhere on the hard disk (c:\orant\reports60\server) and re-inserts that info into the database table on queue startup without checking for duplicate job_ids.
I would appreciate it if someone could try this and confirm it. Is this a bug?
Thanks
Manish
You can trace the report on the reports server. This link talks about tracing from reports builder but also has links and examples of how to set up tracing on the reports server.
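For what it's worth, the override the manual allows amounts to editing the body of rw_server.clean_up_queue in rw_server.sql and reinstalling the package. A hypothetical sketch (the original body's exact statements are assumptions here), keeping the prototype intact as the Reports team advises:

```sql
-- Hypothetical sketch of overriding the cleanup behaviour in rw_server.sql.
-- The procedure name and signature must be preserved; only the body changes.
PROCEDURE clean_up_queue IS
BEGIN
  -- The original body truncates the queue on server restart, e.g.:
  --   DELETE FROM rw_server_queue;
  --   COMMIT;
  -- To keep history across restarts, make the cleanup a no-op instead:
  NULL;
END clean_up_queue;
```

As the original poster found, the server may still re-insert its disk-based queue on startup, so duplicate job_ids would need to be handled separately.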
Similar Messages
-
Report Server Job Queue Information
Hi,
Is there any Oracle table that stores records for each report generated from the report server (containing report server job queue information)?
Thanks in Advance.
Regards
Sameer
Hello,
table RW_SERVER_QUEUE will give you all the info you will ever need about your print jobs. You do need to install a package and then activate the logging.
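Once the package is installed and logging is activated, the table can be inspected with ordinary SQL. A minimal sketch (the exact column list varies by release, so it is safer to DESCRIBE it first):

```sql
-- Inspect the Reports Server queue table (run as the schema that owns it).
-- Column names beyond job_id are release-dependent; DESCRIBE shows them:
DESCRIBE rw_server_queue

SELECT *
  FROM rw_server_queue
 ORDER BY job_id;
```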
Very useful when you need to pinpoint why a report is not working. -
Store Report server Job Queue to Database
Hi,
Is it possible to store the reports server job queue to the database with a version of Reports prior to 6i?
Manish
Yes. If you have installed Reports 6.0 and Reports 6.0 patch 1 or later, you should find a script %ORACLE_HOME%\report60\sql\rw_server.sql - run this (e.g. as scott/tiger or some other user), and then in the <servername>.ora file add the parameter repositoryconn="scott/tiger@mydb". Then restart the server, and you should find the queue information being pushed into the DB. Note that if the API pushes too much information, or you generally want to customise it, you may edit the package - just preserve the function/parameter prototypes.
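Put together as a checklist (scott/tiger/mydb are the example names from the reply, not requirements):

```sql
-- 1. Install the queue API package once, as the repository owner:
--    SQL> @%ORACLE_HOME%\report60\sql\rw_server.sql
-- 2. In the <servername>.ora file, point the server at that schema:
--    repositoryconn="scott/tiger@mydb"
-- 3. Restart the Reports Server, then verify rows are arriving:
SELECT COUNT(*) FROM rw_server_queue;
```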
Regards
The Oracle Reports Team http://technet.oracle.com -
How to find out the user from the Jobs queue in Report server
Hello All!
I have a question about finding out the user from the scheduled jobs queue. Say I schedule a report on Reports Server - how can I find out the user name? When I view the jobs using showjobs, I can see that the DBMS_JOBS table has a column "Job Owner", but it invariably shows "rwuser". So is there a way to find out which user has scheduled which job?
Regards
Shobha
hi,
The tables below will give only the name:
USER_ADDRS
USER_ADDR
USER_ADDRP
USR02
I think you need the email address.
You can use transaction code SU01D:
give the user name and execute it.
I hope it will help you.
Ram
Edited by: Ram velanati on Jun 30, 2008 6:57 PM -
Can I create a table in a procedure submitted to job queue?
I have created a package (with AUTHID CURRENT_USER) where some of the procedures create temporary tables to facilitate processing. These procedures run just fine when executed directly from within an anonymous block at the SQL*PLUS level. However, when I submit the procedures to the job queue (via DBMS_JOB.SUBMIT), the job is submitted successfully but fails when run. Investigating the Alert Log shows an error of insufficient privilege on the CREATE TABLE command in the procedure.
QUESTION:
Can I create a table from a procedure running in the Job Queue? If so, then how to get it to work? Does the job run in a different environment that needs Create Table privileges set to my schema?
Thanks for any info you can provide.
John
FYI: Found the problem. In the Administrator's Guide (of course not in the supplied packages documentation about DBMS_JOB) I found:
"How Jobs Execute
SNP background processes execute jobs. To execute a job, the process creates a session to run the job.
When an SNP process runs a job, the job is run in the same environment in which it was submitted and with the owner's default privileges.
When you force a job to run using the procedure DBMS_JOB.RUN, the job is run by your user process. When your user process runs a job, it is run with your default privileges only. Privileges granted to you through roles are unavailable."
And of course we had set up our users to get all privileges through Roles, so CREATE TABLE wasn't one of my DEFAULT PRIVILEGES!
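Concretely, the fix and the thread's other pitfall can be sketched as follows (the schema and procedure names are invented for illustration; DBMS_JOB does require the ';' inside the what-string):

```sql
-- CREATE TABLE must be granted directly, not via a role, for the job to use it:
GRANT CREATE TABLE TO john;

-- Note the ';' after the call, *inside* the quoted string:
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => l_job,
    what      => 'my_pkg.my_proc;',          -- trailing semicolon is required
    next_date => SYSDATE,
    interval  => 'TRUNC(SYSDATE + 1) + 8/24' -- daily at 08:00
  );
  COMMIT;
END;
/
```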
It sure would be nice if Oracle documentation could get its act together and provide ALL information about a topic in one logical place. The hunt for the privileges information came after it took me half an hour to figure out why my submissions were failing - I didn't have the ';' included in the quoted string for the procedure to be called - which I only figured out after looking at the code for DBMS_JOB, where a note says to make sure you include the ';'. Wouldn't it be good to have that MINOR DETAIL mentioned in the description of DBMS_JOB.SUBMIT????? -
What is the commands for SQL server job to ftp file to remote server?
I created a job to bcp data out and create a file on the file system. After that, I need to ftp the file over to another box. How can I do it from a SQL Server job?
JulieShop
I would like to suggest an SSIS package with an FTP Task instead.
Olaf Helper -
Select Statement Takes 5 Days in Job Queue Manager
Here's our problem...
In one of our packages, there is a D-SQL select statement. Every time we schedule that package to run using Job Queue Manager, it gets stuck. SQL tracing shows it's scanning the blocks, but it just won't finish. If we run the same SQL statement (taken from the output) at the same time, it returns the result in a flash. And if the package is run through SQL*Plus, it runs through just fine. So there must be a bug with running it in the job queue.
Anyone suggestions?
TIA!!!
Helen -
FBU Internal Job Queue Full for Synchronous Processing of Print Requests
Hello All,
We are getting a lot of errors in system log (SM21) with the
Error : FBU Internal Job Queue Full for Synchronous Processing of Print Requests
User :SAPSYS
MNO:FBU
=============================================================
Documentation for system log message FB U :
If a spool server is to process output requests in the order they were
created using multiple work processes, the processing of requests for a
particular device can only run on one work process. To do this an
internal queue (limited size) is used.
If too many requests are created for this device too quickly, the work
process may get overloaded. This is recognized when, as in this case,
the internal queue is exhausted.
This can only be solved by reducing the print load or increasing
processor performance or, if there is a connection problem to the host
spooler, by improving the transfer of data to the host spooler.
Increasing the number of spool work processes will not help, as
requests for one device can only be processed by one work process. If
processing in order of creation is not required, sequential request
processing can be deactivated (second page of device configuration in
Transaction SPAD). This allows several work processes to process
requests from the same device thus alleviating the bottleneck.
Enlarging the internal queue will only help if the overload is
temporary. If the overload is constant, a larger queue will eventually
also be overloaded.
===========================================================
Can you please tell me how to proceed.
Best Regards,
Pratyusha
Solution is here:
412065 - Incorrect output sequence of output requests
Reason and Prerequisites
The following messages appear in the developer trace (dev_trc) of the SPO processes or in the syslog:
S *** ERROR => overflow of internal job queue [rspowunx.c 788]
Syslog Message FBU:
Internal job queue for synchronous request processing of output requests full
The "request processing sequence compliance" on a spool server with several SPO processes only works provided the server-internal job queue (see Note 118057) does not overflow. The size of this request queue is set using the rspo/global_shm/job_list profile parameter. The default value is 50 requests. However, if more output requests arrive for the spool server than can be processed (and the internal request queue is full as a result), more SPO processes are used to process the requests (in parallel), and the output sequence of the requests is no longer guaranteed.
Solution
Increase the rspo/global_shm/job_list profile parameter to a much larger value. Unfortunately, the value actually required cannot be found by "trial and error", because this queue contains all the incoming output requests on a spool server, not just the "sequence compliant" requests. A practical lower limit for this value is the maximum number of sequence-compliant output requests for the output device in question. If, for example, 1000 documents that should be output in sequence are sent from an application program to an output device, the queue must be able to hold 1000 entries so that it does not overflow even if the SPO process works through the requests at minimum speed. -
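Per note 412065 quoted above, the change is a single profile parameter. A rough sketch of the instance-profile entry (the value 1000 is only the note's example and must be sized to your own peak batch):

```
# Instance profile of the spool server (maintained with transaction RZ10);
# default is 50 entries - size it to the largest sequence-compliant batch.
rspo/global_shm/job_list = 1000
```

A restart of the instance is needed for the profile change to take effect.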
Hi all,
I am using Project Server 2007 in a virtual environment. Some time ago, my Project Server queue service was not starting and it gave Error 1053. Then the server team made some small changes in the registry and resolved the problem. After that the queue service was running. But when any job goes into the queue, it processes the job first, then after some processing it goes into "waiting to be processed", and after some time it starts processing again. So the Project Server queue is working very, very slowly. I restarted the queue, event, SQL and timer services, but no benefit.
In the log it is showing "Queue unable to interact with SQL.".
Please help...
Thanks.
Thanks & Regards Pradeep Gangwar
Hi Hrishi,
ULS log is as below:
===========================================================
02/12/2013 10:19:20.99 OWSTIMER.EXE (0x0460)
0x089C
Windows SharePoint Services Timer
5uuf Monitorable
The previous instance of the timer job 'Shared Services Provider Synchronizing Job', id '{AD482C7A-A6D5-4313-A4B6-3F5A78730F61}' for service '{54B6D7E9-6F24-459E-92AC-E11FB157B119}' is still running, so the current instance will be skipped. Consider
increasing the interval between jobs.
02/12/2013 10:20:10.58 w3wp.exe (0x04C0)
0x13FC
Windows SharePoint Services General
8m90 Medium
105 heaps created, above warning threshold of 32. Check for excessive SPWeb or SPSite usage.
02/12/2013 10:20:29.99 OWSTIMER.EXE (0x0460)
0x089C
Windows SharePoint Services Timer
5uuf Monitorable
The previous instance of the timer job 'Config Refresh', id '{5BA90EA2-D960-4F00-BF99-2C9C96056FB1}' for service '{46CB2006-65AC-40C6-9B5D-E2924F18B8CD}' is still running, so the current instance will be skipped. Consider increasing the interval
between jobs.
==========================================================
And in Event Viewer, it is showing:
==========================================================
Log Name: Application
Source: Office SharePoint Server
Date: 12-02-2013 10:14:02
Event ID: 7761
Task Category: Project Server Queue
Level: Error
Keywords: Classic
User: N/A
Computer: xyz
Description:
Standard Information:PSI Entry Point:
Project User: domain\epmadmin
Correlation Id: a5e020df-ee43-417f-b47c-a1f1142fe2cf
PWA Site URL: http://xyz/PWA
SSP Name: SharedServices1
PSError: Success (0)
An unexpected exception occurred in the Project Server Queue. Queue type (Project Queue/Timesheet Queue): ProjectQ. Exception details: CompleteGroup failed.
===========================================================================================================================================================
Log Name: Application
Source: Office SharePoint Server
Date: 12-02-2013 10:14:02
Event ID: 7758
Task Category: Project Server Queue
Level: Error
Keywords: Classic
User: N/A
Computer: xyz
Description:
Standard Information:PSI Entry Point:
Project User: domain\epmadmin
Correlation Id: a5e020df-ee43-417f-b47c-a1f1142fe2cf
PWA Site URL: http://xyz/PWA
SSP Name: SharedServices1
PSError: Success (0)
Queue SQL call failed. Error: System.Data.SqlClient.SqlException: Violation of PRIMARY KEY constraint 'PK_MSP_QUEUE_PROJECT_GROUP_ARCHIVE'.
Cannot insert duplicate key in object 'dbo.MSP_QUEUE_PROJECT_GROUP_ARCHIVE'.
===========================================================================================================================================================
Log Name: Application
Source: Office SharePoint Server
Date: 12-02-2013 10:14:02
Event ID: 7754
Task Category: Project Server Queue
Level: Error
Keywords: Classic
User: N/A
Computer: xyz
Description:
Standard Information:PSI Entry Point:
Project User: domain\epmadmin
Correlation Id: a5e020df-ee43-417f-b47c-a1f1142fe2cf
PWA Site URL: http://xyz/PWA
SSP Name: SharedServices1
PSError: Success (0)
Queue unable to interact with SQL. Queue type (Project Queue, Timesheet Queue etc):
ProjectQ Exception: Microsoft.Office.Project.Server.BusinessLayer.Queue.QueueSqlException: CompleteGroup failed --->
System.Data.SqlClient.SqlException: Violation of PRIMARY KEY constraint 'PK_MSP_QUEUE_PROJECT_GROUP_ARCHIVE'.
Cannot insert duplicate key in object 'dbo.MSP_QUEUE_PROJECT_GROUP_ARCHIVE'.
==========================================================
Thanks & Regards Pradeep Gangwar -
How do I automatically backup SQL Agent jobs and SSIS packages on the mirror daily?
I have seen this question asked before but I could not find a satisfactory answer. What is the best solution to get your SQL Agent jobs/schedules/etc. and your SSIS packages on the mirror server? Here's the details:
Server A is the principal with 2 DBs mirrored over to server B. Everything is fine and dandy, DBs are synched and all good. In Disaster Recovery testing, we need to bring up server B, which now will serve as the principal. Server A is inaccessible. Now,
we need all our jobs that are setup in server A to be in server B, ready to go, but disabled. We also need all our SSIS packages properly stored. Yes, we store our packages in the MSDB in server A.
Now, I can see a few answers coming my way.
1- Backup the MSDB to server B. When you bring server B up as principal in DR, restore the MSDB. All your jobs,schedules,steps, SSIS packages will be there. Is this possible? Can this be done on server B without rendering it incapable of serving as the principal
for the mirrored DBs? My fear with this option is that there may be information in the MSDB itself about the mirroring state of the DBs? Or is all that in the Master DB? Have you tried this?
2- Right click each job, script them out, re-create them on server B... No, thank you very much. :) I am looking for an AUTOMATED, DAILY backup of the jobs and SSIS packages. Not a manual process. Yes, we do change jobs and packages quite often and doing
the job twice on two servers is not an option.
3- Use PowerShell.. Really? Are we going back to scripting at the command prompt like that, that fast?
Since I fear option number 3 will be the only way to go, any hints on scripts you guys have used that you like that does what I need to do?
Any other options?
Any help GREATLY appreciated. :-) I can be sarcastic but I am a good guy..
Raphael
rferreira
I would go with option number 3. Once you have the script, simply run it:
param([string]$serverName, [string]$outputPath)
function Script-SQLJobs([string]$server, [string]$outputfile)
{
    [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server($server)
    $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
    $scrp.Options.ScriptDrops = $FALSE
    $scrp.Options.WithDependencies = $TRUE
    $jobs = $srv.JobServer.get_Jobs() | Where-Object { $_.Name -notlike "sys*" }
    foreach ($job in $jobs)
    {
        $job.Script() >> $outputfile
        "GO" >> $outputfile
    }
}
# Example: Script-SQLJobs "SQLSRV12" "C:\Jobs\test.txt"
Script-SQLJobs $serverName $outputPath
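For option 1 above, the daily automation itself can be plain T-SQL in an Agent job (the paths and names here are placeholders); whether restoring msdb on the mirror is safe for your mirroring setup should be tested first, as the poster suspects msdb may hold mirroring-related state:

```sql
-- Scheduled daily on server A: back up msdb (jobs, schedules, SSIS packages).
BACKUP DATABASE msdb
TO DISK = N'\\serverB\dr\msdb_daily.bak'
WITH INIT;

-- In a DR test on server B (with SQL Server Agent stopped first):
-- RESTORE DATABASE msdb FROM DISK = N'C:\dr\msdb_daily.bak' WITH REPLACE;
```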
Best Regards,
Uri Dimant, SQL Server MVP -
SQL Server jobs fail if I signout the Remote Desktop connection
I have some SSIS jobs running on the production server, which we usually access via Windows Remote Desktop Connection to monitor the jobs. The problem in our case is that if we sign out of the server in the remote connection, all the SQL Server jobs fail until we re-establish a remote connection, but the jobs work fine if we close the remote connection simply with the (x) mark on the Remote Connection interface.
Any idea on this issue?
It is always the same error, like the following:
Code: 0xC0014009 Source: Connection manager "Invoice"
Description: There was an error trying to establish an Open Database Connectivity (ODBC) connection with the database server. End Error Error: 2015-01-12 08:00:05.26 Code: 0x0000020F Source: DFT GetData [2]
Description: The AcquireConnection method call to the connection manager Invoice failed with error code 0xC0014009. There may be error messages posted before this with more information on why the AcquireConnection method call failed. End Error Error:
2015-01-12 08:00:05.26
Code: 0xC0047017 Source: DFT GetData SSIS.Pipeline
Description: Data failed validation and returned error code 0x80004005. End Error Error: 2015-01-12 08:00:05.27
Code: 0xC004700C Source: DFT GetData SSIS.Pipeline
Description: One or more component failed validation. End Error Error: 2015-01-12 08:00:05.28
Code: 0xC0024107 Source: DFT GetData
Description: There were errors during task validation. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 8:00:00 AM Finished: 8:00:06 AM Elapsed: 5.678 seconds. The package execution
failed. The step failed. -
I've used Project Web App on SharePoint 2013. After clearing the job queue about a force check-in, I got this error. It means I can't access some pages in PWA anymore, e.g. Project Center, Resource Center, PWA settings, etc. If anyone has encountered this problem, please help.
The pop-up error message is "Sorry, you don't have access to this page".
Hi,
According to your description, after you cleared the job queue about force check-in, you can't access some pages in PWA.
Maybe you need to check in the pages which has been checked out.
If you have administrative rights, it is possible to override the check-out via the View All Site Content page:
Site Actions->View All Site Content->Pages->Hover the item you want to check in, and from the context-menu (arrow-down next to the filename), choose "Discard Check Out"
Besides, to troubleshoot the error "Sorry, you don't have access to this page", refer to the following articles:
http://sharepoint.rackspace.com/sharepoint-2013-troubleshooting-an-access-issue-with-a-custom-master-page
http://technet.microsoft.com/en-us/library/ff758656(v=office.15).aspx
In addition, as this issue is related to project server, I suggest you create a new thread on project server, more experts will assist you:
https://social.technet.microsoft.com/Forums/projectserver/en-US/home?category=project
Best Regards,
Lisa Chen
TechNet Community Support -
Job queue / Intelligent Agent / dbsnmp
Hi everybody,
we are testing Oracle8 on a Linux SuSE6.0 system with SMP.
Installing the software was ok by following the instructions on
the SuSE-website and doing some own work.
First to answer/solve a lot of questions/problems towards the
dbsnmp_start:
We also had the problems that were described before; error
message 0315 was posted in the logfiles and the dbsnmp died
immediately.
I solved the problem by installing the TCL_NEW-package of SuSE5.3
which is of version 8.0p1-11. After setting the permission to
root:dba on dbsnmp*-files the dbsnmpd starts working.
Discovering the database from our NT machine works fine; also, the agent works (it can ping the db), but transmitting jobs fails.
The error message after sending a job is "Error accessing job
queue". The job was submitted (so it seems to be) by the linux
machine but the scheduling fails.
Has anybody a solution?
Dietmar
Hi (again),
We had success. After spending some more time on it and a lot of brain (CPU) time, we have solved the problem.
Steps I have taken:
- upgrading from 8.0.5 to 8.0.5.1 by patching (as mentioned in the mail before)
- applying the lnx_patch, too (making a link to "make" for the command "gmake" used in the patch shell script)
- applying the patch patch2.tgz from the download section
- copying the two libraries out of patch2 (libclient.a and libclntsh.so.1.0) to /lib and to ORACLE_HOME/lib
- changing permissions and ownership of those libraries to oracle:dba
Thanks to S C Maturi for his hint on patching. It was the basis for our success.
Dietmar
Dietmar Stein (guest) wrote:
: Thanks for the advice, but the problem remains the same. I will add some information I forgot in the first message:
: - using Standard Edition 8.0.5, upgraded to 8.0.5.1.0
: - not applied the lnx_patch (did the work myself before)
: Another question: does the problem remain with Enterprise Edition (any experiences?)?
: Dietmar
: S C Maturi (guest) wrote:
: : Dietmar,
: : If you are using the 8.0.5 Standard Edition - is this happening after applying the 8.0.5.1 patch?
: : Maturi -
Firing SQL Server Job Through Java
Hi All
How can I fire a SQL Server job from Java? I am from a data warehousing background and new to Java. I process my cubes through a package in DTS and have encapsulated the package in a job.
I can connect to the SQL Server where the package & job reside.
Regards
Abhinav
Can it be called like a stored procedure? If so (and probably it can), look into CallableStatement... a good first step would be to see how to run it through SQL statements in an admin tool (like Query Analyzer) to see how it can be invoked.
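Following up on the reply above: the admin-tool test amounts to calling the system procedure msdb.dbo.sp_start_job, and once that works in Query Analyzer, the same statement can be issued from Java via CallableStatement (the job name below is a placeholder):

```sql
-- Start the Agent job that wraps the DTS package:
EXEC msdb.dbo.sp_start_job @job_name = N'ProcessCubes';

-- sp_start_job returns immediately (the job runs asynchronously);
-- poll the outcome with:
EXEC msdb.dbo.sp_help_job @job_name = N'ProcessCubes';
```

The account used by the Java connection needs permission to execute these procedures in msdb.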
If it can be invoked through SQL at all you can call it in Java. Also be aware of permission issues. -
Job queues are not executed as scheduled
When I upgraded from 9i to 10g, I also migrated my procedures into packages. These procedures were created to be executed from the job queue. I noticed that since I migrated them into packages, the jobs have stopped executing - it doesn't even look like they were tried.
I've pasted a sample of one record taken from 10/23/2008 8:39:00AM:
LAST_DATE: 10/22/2008 8:15:03AM
NEXT_DATE: 10/23/2008 8:15:00AM
BROKEN: N
WHAT: DZSP.JOBS_UTIL.UPLOAD_PLC_2MAXIMO;
INTERVAL: TRUNC(SYSDATE+1)+8.25/24
FAILURES: 0
Your assistance is greatly appreciated!
Thanks for your reply.
Yes, I am using the DBMS_JOB package. I was not aware that there was another scheduler - is there a difference or a preference?
The setting for JOB_QUEUE_PROCESSES is set to 10, DEFAULT=NO.
Hope that helps.
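With DBMS_JOB (DBMS_SCHEDULER is the newer scheduler introduced in 10g and is generally preferred for new work), the queue state can be checked from the dictionary. A few diagnostic statements (the job id in the last line is a placeholder):

```sql
-- Check whether the job is known, broken, or failing silently:
SELECT job, what, last_date, next_date, broken, failures
  FROM dba_jobs
 WHERE what LIKE '%UPLOAD_PLC_2MAXIMO%';

-- Confirm job processes are enabled (should be > 0):
SELECT value FROM v$parameter WHERE name = 'job_queue_processes';

-- Force a run to surface any error directly in your session:
EXEC DBMS_JOB.RUN(<job_id>);
```

Running the job in your own session with DBMS_JOB.RUN is often the quickest way to see the actual error the background process is hitting.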