Long running FI close jobs as a result of FI settlement to 3 cost objects
We are trying to gain some insight into how we can deal with the large volume of data generated by PennDOT's reclassification of costs in SAP (plant maintenance), and any impact this may have on system performance. Specifically, PennDOT's current process requires settlement to 3 cost objects; the SAP standard is settlement to 2 cost objects. This process creates approximately 60 million line item transactions each year.
More details:
IES – PennDOT Settlement Process, 06/24/10
Business Requirement – settle to three cost objects
1) Run SAP settlement program – only settles to 2 cost objects
• Three line items in COEP per G/L posting
2) Process to settle to the third cost object
a. Reverse the cost object. This generates a CO line item.
b. Repost to the third cost object. This generates a CO line item.
• Six line items in COEP per G/L posting
3) Assumption: CO line items are only a means to post to the third cost object (cost center). They are used only for internal management reporting and are not needed for any other processing or federal audit requirements.
a. These CO line items can be uniquely identified for archiving by the BA, BT, and text fields.
4) The long-running monthly and annual close jobs are due to the number of CO line items (COEP) in the system, estimated at 16 to 20 million line items per year; 166,000 items were created in one day.
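As a rough sanity check on the volumes above, the per-posting multiplication can be sketched as follows (the 10 million postings per year is a hypothetical figure used only for illustration; the 3-vs-6 line items per posting come from the steps above):

```python
# Standard settlement writes 3 COEP line items per G/L posting;
# the reverse + repost to the third cost object adds 3 more (6 total).
LINES_STANDARD = 3
LINES_WITH_REPOST = 6

postings_per_year = 10_000_000  # hypothetical posting volume

total_line_items = postings_per_year * LINES_WITH_REPOST
extra_line_items = postings_per_year * (LINES_WITH_REPOST - LINES_STANDARD)

print(total_line_items)  # total COEP line items per year at this volume
print(extra_line_items)  # line items that exist only to reach the third cost object
```

Half of the volume under this process exists only to serve the third cost object, which is why those line items are attractive candidates for archiving.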
Hi,
Please check this link:
http://www.fhwa.dot.gov/infrastructure/asstmgmt/dipa.pdf
I am digging into your issue; let me know clearly what you need.
Thanks,
Karthik
Similar Messages
-
Is there a way to get long running SQL Agent jobs information using powershell?
Hi All,
Is there a way to get long running SQL Agent jobs information using powershell for multiple SQL servers in the environment?
Thanks in Advance.
--Hunt
I'm running SQL queries to fetch the required details and store them in a centralized table.
foreach ($svr in get-content "f:\PowerSQL\Input\LongRunningJobsPowerSQLServers.txt"){
$dt = new-object "System.Data.DataTable"
$cn = new-object System.Data.SqlClient.SqlConnection "server=$svr;database=master;Integrated Security=sspi"
$cn.Open()
$sql = $cn.CreateCommand()
$sql.CommandText = "SELECT
@@SERVERNAME servername,
j.job_id AS 'JobId',
name AS 'JobName',
max(start_execution_date) AS 'StartTime',
max(stop_execution_date) AS 'StopTime',
max(avgruntimeonsucceed) AS 'AvgRunTimeOnSucceed',
max(DATEDIFF(s,start_execution_date,GETDATE())) AS 'CurrentRunTime',
max(CASE WHEN stop_execution_date IS NOT NULL THEN
DATEDIFF(ss,start_execution_date,stop_execution_date) ELSE 0 END) 'ActualRunTime',
max(CASE
WHEN stop_execution_date IS NULL THEN 'JobRunning'
WHEN DATEDIFF(ss,start_execution_date,stop_execution_date)
> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-History'
ELSE 'NormalRunning-History'
END) 'JobRun',
max(CASE
WHEN stop_execution_date IS NULL THEN
CASE WHEN DATEDIFF(ss,start_execution_date,GETDATE())
> (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-NOW'
ELSE 'NormalRunning-NOW'
END
ELSE 'JobAlreadyDone'
END)AS 'JobRunning'
FROM msdb.dbo.sysjobactivity ja
INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
INNER JOIN (
SELECT job_id,
AVG
((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100)
+
STDEV
((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100) AS 'AvgRuntimeOnSucceed'
FROM msdb.dbo.sysjobhistory
WHERE step_id = 0 AND run_status = 1
GROUP BY job_id) art
ON j.job_id = art.job_id
WHERE
(stop_execution_date IS NULL and start_execution_date is NOT NULL) OR
(DATEDIFF(ss,start_execution_date,stop_execution_date) > 60 and DATEDIFF(MINUTE,start_execution_date,GETDATE())>60
AND
CAST(LEFT(start_execution_date,11) AS DATETIME) = CAST(LEFT(GETDATE(),11) AS DATETIME) )
--ORDER BY start_execution_date DESC
group by j.job_id,name"
$rdr = $sql.ExecuteReader()
$dt.Load($rdr)
$cn.Close()
$dt|out-Datatable
Write-DataTable -ServerInstance 'test124' -Database "PowerSQL" -TableName "TLOG_JobLongRunning" -Data $dt}
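The subquery above decodes msdb's run_duration column, which is stored as a packed HHMMSS integer, into seconds. The same decoding can be sketched in Python (a hypothetical helper mirroring the SQL arithmetic):

```python
def run_duration_to_seconds(run_duration: int) -> int:
    """Decode msdb.dbo.sysjobhistory.run_duration (packed HHMMSS integer) into seconds."""
    hours = run_duration // 10000
    minutes = (run_duration % 10000) // 100
    seconds = run_duration % 100
    return hours * 3600 + minutes * 60 + seconds

# 1 hour, 30 minutes, 59 seconds is stored as the integer 13059
print(run_duration_to_seconds(13059))  # 5459
```

The packed format is why a plain numeric AVG over run_duration is misleading: 130 (1 min 30 s) and 90 (90 s) are the same duration but very different integers, so the decode must happen before averaging.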
You can refer to the link below for the out-DataTable and Write-DataTable functions.
http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/01/use-powershell-to-collect-server-data-and-write-to-sql.aspx
Once we have the table details, I send one consolidated email automatically.
--Prashanth -
Long running table partitioning job
Dear HANA gurus,
I've just finished table partitioning jobs for CDPOS (change document item) with 4 partitions, by hash on 3 columns.
Total data volume is around 340GB, and the table size was 32GB!
(The migration job was done without disabling CD, so I am currently deleting data on the table with RSCDOK99.)
Before partitioning, the data volume of the table was around 32GB.
After partitioning, the size has changed to 25GB.
It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
(It is the QA DB, so fewer complaints.)
I thought that I might not be able to do this in the production DB.
Does anyone have any idea for accelerating this task? (This is the fastest DBMS, HANA!)
Or do you have any plans for online table partitioning functionality? (To the HANA development team)
Any comments would be appreciated.
Cheers,
- Jason
Jason,
looks like we're cross talking here...
What was your rationale to partition the table in the first place?
=> To reduce the deleting time of CDPOS. (As I mentioned, it was almost 10% of the whole data volume, so I wanted to save deletion time on the table through some benefit of partitioning, like partition pruning.)
Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
As deletion of data is heavily related to locating the records to be deleted, creating an index would probably have been the better choice.
Thinking about it... you want to get rid of 10% of your data and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - equally holding their 25% share of the 10% records to be deleted.
The deletion then should run along these 4 sets of 25% of data.
It's surely me, but where is the speedup potential here?
How many unloads happened during the re-partitioning?
=> It was fully loaded into memory before I partitioned the table myself (from HANA Studio).
I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
How do the now longer running SQL statements look like?
=> As I mentioned, selecting/deleting time increased almost twofold.
That's not what I asked.
Post the SQL statement text that was taking longer.
What are the three columns you picked for partitioning?
=> mandant, objectclas, tabname (QA has 2 clients, and each of them has nearly the same number of rows in the table)
Why those? Because these are the primary key?
I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
In that case the partition pruning cannot work and all partitions have to be searched.
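The pruning effect described above can be sketched abstractly: with hash partitioning, the target partition is computable only when the query supplies values for all partitioning columns (a simplified model for illustration, not HANA's actual hash function):

```python
def partitions_to_search(where_columns,
                         part_columns=("MANDT", "OBJECTCLAS", "TABNAME"),
                         n_partitions=4):
    """Return how many partitions a query must touch under hash partitioning."""
    if all(col in where_columns for col in part_columns):
        return 1          # hash of the full key identifies a single partition
    return n_partitions   # pruning impossible: every partition must be searched

print(partitions_to_search({"MANDT", "TABNAME"}))                # all 4 partitions
print(partitions_to_search({"MANDT", "OBJECTCLAS", "TABNAME"}))  # exactly 1
```

So a query filtering only on MANDT and TABNAME gains nothing from the 4-way hash split; it simply does the same work in four pieces.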
How did you come up with 4 partitions? Why not 13, 72 or 213?
=> I thought each partition's size would be 8GB (32GB/4) if they were divided into equal sizes (just a simple thought), and 8GB is almost the same size as the other largest top-20 tables in the HANA DB.
Alright, so basically that was arbitrary.
Regarding the last comment of your reply: most people would partition their existing large tables to get some benefit of partitioning (just like me). I think your comment applies to newly inserted data.
Well, not sure what "most people" would do.
HASH-partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
- Lars -
Hello everyone,
Thanks for your interest in this post. I have a design question on a feature we are currently working on - I have described it below as best as I can. I would be thankful for any direction or guidance from all the good and experienced folks here in this forum :)
We have a system which receives some input data (say in the form of files or a db update) from external sources. We have a job which wakes up periodically, checks for the presence of new data and then does some parsing/data manipulation. I have posted it below and for the sake of brevity and conciseness kept it simple.
public class Work {
    private SomeVar _var;
    // constructor here
    public void doWork() {
        // run job
    }
}
Note that we have multiple kinds of input (so multiple Work classes, each of which has different parse logic).
So far so good.
We are on JDK 1.4 (no java.util.concurrent), and we don't have the option to use a scheduling framework like Quartz.
It's simple enough, and we don't need listeners, dynamic scheduling, etc.
That leaves us with 2 options to implement this:
Option 1
Create a thread for each kind of job on startup.
Each thread checks for presence of input data and creates a Work object to process the job.
public class WorkThread extends Thread { /* some common functionality here */ }

public class SpecificWorkThread extends WorkThread {
    private MetaInfo _meta;
    // constructor
    public void run() {
        while (true) {
            // check for presence of input data
            Work _w = new Work();
            _w.doWork();
            // sleep for some time
        }
    }
}

public class Controller {
    public static void main(String[] args) {
        // for each job:
        WorkThread t = new SpecificWorkThread();
        t.start();
        // join on all threads
    }
}
As you can see, this would create (from the main) a long-running thread for each job type.
Option 2
Have the input-data check in the controller class. When there is input data to parse, create a thread that does the parsing.
The thread parses in its run method and exits. So for each parse cycle a short-lived thread is created, something like below:
public class WorkController {
    public static void main(String[] args) {
        while (true) {
            // check for input condition
            new SpecificWorkThread(new Work()).start();
            // sleep for some time
        }
    }
}

public class SpecificWorkThread extends Thread {
    private Work _work;
    // constructor
    public void run() {
        _work.doWork();
    }
}
The difference between the two is that while the first creates a long-running thread per job type, in option 2 a thread is created on demand. Each thread is short-lived (it does its job and dies), but a thread then needs to be created every time for a job.
Both would work, and work correctly. What I would like to understand is whether the options presented above are just a programmer's preference, or whether one option is better than the other (performance, memory considerations, etc.)?
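The two options can be sketched as follows (Python stand-ins for the Java classes above; do_work is a hypothetical placeholder for Work.doWork()):

```python
import threading
import queue

def do_work(item):
    """Placeholder for Work.doWork(): parse one unit of input data."""
    return item * 2

# Option 1: one long-lived worker per job type, blocking until input arrives.
def polling_worker(inbox, results):
    while True:
        item = inbox.get()   # stands in for "check for presence of input data"
        if item is None:     # shutdown sentinel
            return
        results.append(do_work(item))

# Option 2: a short-lived thread created on demand for each parse cycle.
def run_once(item, results):
    t = threading.Thread(target=lambda: results.append(do_work(item)))
    t.start()
    return t                 # caller may join if it needs the result
```

Either way the parsing work is identical; the trade-off is thread-creation overhead on every cycle (option 2) versus one always-resident thread per job type (option 1).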
Thanks for your patience in reading this post.
cheers,
ram.
Generally, creating a new thread is an expensive process. Well, everything is relative: my laptop can create, run, and stop 7,000+ threads per second (test program below; YMMV). If you are dealing with thousands of thread creations per second, pooling may be sensible; if not, premature optimization is the root of all evil, etc.
public class ThreadSpeed {
    public static void main(String[] args) throws Exception {
        System.out.println("Ignore the first few timings.");
        System.out.println("They may include Hotspot compilation time.");
        System.out.println("I hope you are running me with \"java -server\"!");
        for (int n = 0; n < 5; n++)
            doit();
        System.out.println("Did you run me with \"java -server\"? You should!");
    }

    public static void doit() throws Exception {
        long start = System.currentTimeMillis();
        for (int n = 0; n < 10000; n++) {
            Thread thread = new Thread(new MyRunnable());
            thread.start();
            thread.join();
        }
        long end = System.currentTimeMillis();
        System.out.println("thread time " + (end - start) + " ms");
    }

    static class MyRunnable implements Runnable {
        public void run() {
        }
    }
}
Edited by: sjasja on Jan 14, 2010 2:20 AM -
Progress indicator on long running jobs
I have an FX application that is directly linked to my database. The program allows all DML operations as well as user-defined actions (action commands and various other methods). I have the same application running in Swing, SWT, and Canoo ULC, and all work just fine. In each of the other front-end types, the application automatically displays a busy indicator when a long running job is executed. Now I need this in FX.
My application is basically a Rich Client framework which allows the same business logic and forms to have different front ends depending on customer requirements. The application is built by customers in a 4GL-style development tool. The application is actually built at run time, and the data is provided by the user through various services. Because I am building an FX program for a framework, I don't know when the user may execute a long-running job (for example, when a button is pressed). I have full control over the retrieval and modification of data, but not over user interaction. I am therefore looking for a busy indicator that appears automatically when the main thread is waiting.
Any help would be great!
Hi guys, and thanks for your answers.
I may have stretched the mark with "long running jobs"; by these I mean a database query, a price calculation, an order process, etc. These are standard jobs that are issued in a Rich Client application. Basically, I have a screen which will execute a query, and I want to give the user feedback while the query is executing so that he doesn't think the application has hung. In Swing I did this by creating my own event queue with a delay timer:
public class WaitCursorEventQueue extends EventQueue implements DelayTimerCallback {
    private final CursorManager cursorManager;
    private final DelayTimer waitTimer;

    public WaitCursorEventQueue(int delay) {
        this.waitTimer = new DelayTimer(this, delay);
        this.cursorManager = new CursorManager(waitTimer);
    }

    public void close() {
        waitTimer.quit();
        pop();
    }

    protected void dispatchEvent(AWTEvent event) {
        cursorManager.push(event.getSource());
        waitTimer.startTimer();
        try {
            super.dispatchEvent(event);
        } finally {
            waitTimer.stopTimer();
            cursorManager.pop();
        }
    }

    public AWTEvent getNextEvent() throws InterruptedException {
        waitTimer.stopTimer(); // started by pop(); this catches modal dialogs
                               // closing that do work afterwards
        return super.getNextEvent();
    }

    public void trigger() {
        cursorManager.setCursor();
    }
}
I then implemented this into my application like this:
_eventQueue = new WaitCursorEventQueue(TIMEOUT);
Toolkit.getDefaultToolkit().getSystemEventQueue().push(_eventQueue);
Now, each time the application waits beyond the configured delay, the cursor becomes a wait cursor. This gives the user visual feedback so that he knows the application is working and not just hung. By doing this, I do not need to wrap each user callout in a timer. Much easier like this!
I would like to implement the same in FX.
Edited by: EntireJ on Dec 15, 2011 12:34 AM -
Re:How to determine the long running jobs in a patch
Hi ,
How can I determine the long-running jobs in a patch?
Regards
Hi,
Check the below MY ORACLE SUPPORT note:
Note.252422.1 .... Check Completed Long Running Jobs In Oracle Apps.
Best regards,
Rafi -
This is urgent, please help! Long running job in SM37 but not in SM66
Hi,
Can someone please help with an explanation as to why, if you call SM37, you can see that the job is long running (for example, about 70,000 sec), but when you look in SM66 at the PID where the job is running, it does not show 70,000 sec but only around 6,000 sec?
Can someone please explain why? Thank you very much.
For background processes, additional information is available for the background job that is currently running. You can only display this information if you are logged onto the instance where the job is running, or if you choose Settings and deselect "Display only abbreviated information, avoid RFC". In any case, the job must still be running.
Regards
Anilsai -
Alert monitor for long running background jobs
Hello,
I have to configure an alert monitor for long-running background jobs (those running more than 20,000 secs) using a rule-based MTE. I have created a rule-based MTE and assigned the MTE class CCMS_GET_MTE_BY_CLASS to a virtual node, but I don't find a node to specify the time.
Could anyone guide me on how I can do this?
Thanks,
Kasi
Hi *,
I think the missing bit is where to set the maximum runtime. The runtime is set in the collection method, not the MTE class.
Process: RZ20 --> SAP CCMS Technical Expert Monitors --> All Contexts on local application server --> background --> long-running jobs. Click 'Jobs over Runtime Limits', then Properties; click the Methods tab, then double-click 'CCMS_LONGRUNNING_JOB_COLLECT'. In the Parameters tab you can then set the maximum runtime.
If you need to monitor specific jobs, follow the process (http://help.sap.com/saphelp_nw70/helpdata/en/1d/ab3207b610e3408fff44d6b1de15e6/content.htm) to create the rule based monitor, then follow this process to set the runtime.
Hope this helps.
Regards,
Riyaan.
Edited by: Riyaan Mahri on Oct 22, 2009 5:07 PM
Edited by: Riyaan Mahri on Oct 22, 2009 5:08 PM -
Can a long running batch job causing deadlocks bring server performance down?
Hi
I have a customer with a long-running batch job (approx 6 hrs); recently we experienced a performance issue where the job now takes >12 hrs. The database server is crawling, and looking at the alert.log shows some deadlocks.
The batch job is in fact many parallel child batch jobs running at the same time, which would explain the deadlocks.
Thus, I am just wondering about the possibility that, due to deadlocks, the whole server could be brought to a crawl; even connecting to the database using Toad is slow, as is doing ls -lrt.
Thanks
Rgds
UngKok Aik wrote:
According to the documentation, a complex deadlock can make the job appear hung and affect throughput, but it didn't mention how it would make the whole server slow down. My initial thought would be that the rolling back and reconstruction of CR copies would use up the CPU.
I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
You can have a long-running update hit a row which was changed by another user after the update started, which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion of the topic).
Once concurrent processes start sliding out of their correct sequences because of a few delays, it's possible for reports that used to run when nothing else was going on suddenly finding themselves running while updates are going on - and doing lots more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
And so on...
Anyway, according to the customer, the problem seems to be related to the lgpr_size, as the problem disappeared after they reverted it to its original default value, 0. I couldn't figure out what the lgpr_size is - can you explain?
Thanks
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking" Carl Sagan -
Ccms monitoring - Long running job
I checked CCMS monitoring (RZ20) in version ECC 6.0; there is no option to monitor long-running background jobs.
How can I monitor long-running background jobs using transaction RZ20?
Hi,
Check this link: Re: How to monitor long running background job.
Thanks
Sunny -
Long Running Jobs based on average time of last 5 run
Hi Experts,
I need a query to find the long-running jobs, based on the average time of the last 5 runs.
Could you please help me.
Thanks in advance.
---------------------------------
Devender Bijania
SELECT
[sJOB].[name] AS [JobName]
, CASE
WHEN [sJOBH].[run_date] IS NULL OR [sJOBH].[run_time] IS NULL THEN NULL
ELSE CAST(
CAST([sJOBH].[run_date] AS CHAR(8))
+ ' '
+ STUFF(
STUFF(RIGHT('000000' + CAST([sJOBH].[run_time] AS VARCHAR(6)), 6)
, 3, 0, ':')
, 6, 0, ':')
AS DATETIME)
END AS [LastRunDateTime]
, CASE [sJOBH].[run_status]
WHEN 0 THEN 'Failed'
WHEN 1 THEN 'Succeeded'
WHEN 2 THEN 'Retry'
WHEN 3 THEN 'Canceled'
WHEN 4 THEN 'Running' -- In Progress
END AS [LastRunStatus]
, STUFF(
STUFF(RIGHT('000000' + CAST([sJOBH].[run_duration] AS VARCHAR(6)), 6)
, 3, 0, ':')
, 6, 0, ':')
AS [LastRunDuration (HH:MM:SS)]
, CASE [sJOBSCH].[NextRunDate]
WHEN 0 THEN NULL
ELSE CAST(
CAST([sJOBSCH].[NextRunDate] AS CHAR(8))
+ ' '
+ STUFF(
STUFF(RIGHT('000000' + CAST([sJOBSCH].[NextRunTime] AS VARCHAR(6)), 6)
, 3, 0, ':')
, 6, 0, ':')
AS DATETIME)
END AS [NextRunDateTime]
FROM
[msdb].[dbo].[sysjobs] AS [sJOB]
LEFT JOIN (
SELECT
[job_id]
, MIN([next_run_date]) AS [NextRunDate]
, MIN([next_run_time]) AS [NextRunTime]
FROM [msdb].[dbo].[sysjobschedules]
GROUP BY [job_id]
) AS [sJOBSCH]
ON [sJOB].[job_id] = [sJOBSCH].[job_id]
LEFT JOIN (
SELECT
[job_id]
, [run_date]
, [run_time]
, [run_status]
, [run_duration]
, [message]
, ROW_NUMBER() OVER (
PARTITION BY [job_id]
ORDER BY [run_date] DESC, [run_time] DESC
) AS RowNumber
FROM [msdb].[dbo].[sysjobhistory]
WHERE [step_id] = 0
) AS [sJOBH]
ON [sJOB].[job_id] = [sJOBH].[job_id]
AND [sJOBH].[RowNumber] = 1
ORDER BY [LastRunDateTime] desc,
[LastRunDuration (HH:MM:SS)] DESC
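The STUFF/RIGHT gymnastics above convert msdb's packed run_date (YYYYMMDD) and run_time (HHMMSS) integers into a readable timestamp. The same conversion can be sketched in Python (a hypothetical helper mirroring the SQL):

```python
from datetime import datetime

def job_run_datetime(run_date: int, run_time: int) -> datetime:
    """Combine msdb's run_date (YYYYMMDD int) and run_time (HHMMSS int) into a datetime."""
    return datetime.strptime(f"{run_date:08d} {run_time:06d}", "%Y%m%d %H%M%S")

print(job_run_datetime(20141101, 104300))  # 2014-11-01 10:43:00
```

Zero-padding matters: a run_time of 45 means 00:00:45, so formatting the integer to six digits before parsing avoids misreading early-morning runs.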
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
Trigger Alert for Long Running Jobs in CPS
Hello,
I am currently trying to create a trigger so that I can monitor the long-running jobs in the underlying ERP. Can you help me with the APIs to use?
I'm trying to modify my previous alert - checking for failed jobs:
// only check error jobs
if (jcsPostRunningContext.getNewStatus().equals(JobStatus.Error)) {
String alertList = "email address ";
String [] group = alertList.split(",");
for (int i = 0; i < group.length; i++) {
JobDefinition jobDefinition = jcsSession.getJobDefinitionByName("System_Mail_Send");
Job aJob = jobDefinition.prepare();
aJob.getJobParameterByName("To").setInValueString(group[i]);
aJob.getJobParameterByName("Subject").setInValueString("Job " + jcsJob.getJobId() + " failed");
aJob.getJobParameterByName("Text").setInValueString(jcsJob.getDescription());
I'm trying to look for the API so I can subtract the job's start time from the system time and compare it to 8 hours.
if (jcsPostRunningContext.getNewStatus().equals(JobStatus.Error)) { <-- Can I have it as ( ( System Time - Start Run Time ) > 8 Hours )?
Or is there an easier way? Can somebody advise me on how to go about this one?
Hi,
You can do it using the api:
if ((jcsJob.getRunEnd().getUTCMilliSecs() - jcsJob.getRunStart().getUTCMilliSecs()) > (8*60*60*1000))
This has the drawback that you will only be notified when the job finally ends (maybe after more than 8 hours!).
The easier and more integrated method is to use the Runtime Limits tab on your Job Definition or Job Chain. This method can raise an event when the runtime limit is reached; the event can trigger your notification method.
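The millisecond arithmetic for an 8-hour threshold is easy to get wrong (8 hours is 8*60*60*1000 ms, not 8*24*60*1000); a quick check:

```python
EIGHT_HOURS_MS = 8 * 60 * 60 * 1000  # 28,800,000 ms

def ran_longer_than(run_start_ms, run_end_ms, threshold_ms=EIGHT_HOURS_MS):
    """Mirrors the jcsJob.getRunEnd()/getRunStart() millisecond comparison."""
    return (run_end_ms - run_start_ms) > threshold_ms

print(EIGHT_HOURS_MS)                  # 28800000
print(ran_longer_than(0, 28_800_001))  # True: one ms over the threshold
```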
Regards Gerben -
How to research a long running job from 3 days ago
A client called to say that a job that normally runs for 6 hours ran for 18 hours on 11/01. 11/01 was a Saturday, and end of month. The long-running job writes to a log, and I can see from the log that the problem started right around 10:43 AM. Every step before 10:43 was taking the normal amount of time. Then at 10:43 a step that takes seconds hung for 12 hours. After 12 hours the step finished and the job completed successfully.
I looked at the SQL log, event log, and job history (for all jobs). What else can I look at to try and resolve an issue that happened on 11/01/2014?
It does execute an SSIS package.
Personally I feel this is a kind of bug in the SSIS package, but I am not an expert in SSIS, so I would move it to the SSIS forum. Please update your question with complete information about what the SSIS package does.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
I do prefer using the home button. Will it somehow be 'damaged' in the long run? I just need assurance. *TIA.
Any mechanical component is subject to failure. If it does fail, if your device is in hardware warranty, it would likely be replaced. If you are still within the first year, purchase AppleCare to extend the warranty for an additional year.
-
Long running threads (Jasper Reports) and AM-Pooling
Hi,
we are developing a quite large application with ADF and BC. We have quite a lot of reports generated through Jasper that take a long time to complete. The result is a PDF document that the user gets in the UI so he can download it over a download link. Reports that take over an hour to finish are never completed and returned to the user in the UI. I think the problem is in AM pooling, because we are using the default AM pooling settings:
<AM-Pooling jbo.ampool.maxinactiveage="600000" jbo.ampool.monitorsleepinterval="600000" jbo.ampool.timetolive="3600000"/>
The AM is destroyed or returned to the pool before the report finishes. How do I properly configure those settings so that even long-running threads will do their jobs to the end?
We also modified web.xml as follows:
<session-config>
<session-timeout>300</session-timeout>
</session-config>
Any help appreciated.
Regards, Tadej
Your problem is not related to ADF Application Modules. AMs are returned to the pool no earlier than the end of the request, so they are certainly not destroyed by the framework while the report is running. The AM timeout settings you are referring to are applicable only to idle AMs in the pool, not to AMs that have been checked out and are being used by some active request.
If you are using MS Internet Explorer, then most probably your problem is related to the IE's ReceiveTimeout setting, which defines a timeout for receiving a response from the server. I have had such problems with long running requests (involving DB processing running for more than 1 hour) and solved my problem by increasing this timeout. By default this timeout is as follows:
IE4 - 5 minutes
IE5, 6, 7, 8 - 60 minutes
I cannot find what the default value is for IE9 and IE10, but some people claim it is only 10 seconds, although this information does not sound reasonable and reliable! Anyway, the real value is hardly greater than 60 minutes.
You should increase the ReceiveTimeout registry value to an appropriate value (greater than the time necessary for your report to complete). Follow the instructions of MS Support here:
Internet Explorer error "connection timed out" when server does not respond
I have searched the Internet for similar timeout settings for Google Chrome and Mozilla Firefox, but I have not found anything, so I instructed my customers (who execute long-running DB processing) to configure and use IE for these requests.
Dimitar