Web.ProcessBatchData performance issue - optimised way to insert 20000 records.
Hi all,
I am using the following code to insert 20,000 items into a SharePoint list.
I added "Thread.Sleep(999999999)" so that ProcessBatchData could finish inserting the items.
But after some time I get the error: "Thread was being aborted".
I want to optimize the performance, or is there another way to use a thread with the process batch?
Can anyone please help me out here?
Code:
string BatchToInsertPRProducts = string.Empty;
BatchToInsertPRProducts = BuildBatch(row, spPRProductList.ID);
string BatchPRProductReturn = web.ProcessBatchData(BatchToInsertPRProducts);
Thread.Sleep(999999999);
Thanks,
Harish Patil
Hi Harish Patil:
In a basic text editor such as Notepad, open the web.config file, for example in the %SYSTEMDRIVE%\Inetpub\wwwroot -or- %SYSTEMDRIVE%\Inetpub\wwwroot\wss\VirtualDirectories\80 folder.
Press CTRL+F to open the Find dialog box.
Find the following tag:
<httpRuntime maxRequestLength="51200" />
Replace it with this tag:
<httpRuntime executionTimeout="6000" maxRequestLength="51200" />
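Raising the execution timeout helps, but the more robust fix is to split the 20,000 items into several smaller ProcessBatchData calls so each request completes well within the timeout, instead of one giant batch plus Thread.Sleep. A rough sketch of the chunking idea (illustrative Python, not the SharePoint API; the names `chunked` and `insert_in_batches`, the batch size of 500, and the `send` stand-in are all assumptions):

```python
# Illustrative only: split a large set of rows into smaller batches so each
# ProcessBatchData-style call stays well under the request timeout.

def chunked(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def insert_in_batches(rows, batch_size=500, send=print):
    """Build and send one batch payload per chunk.
    send() stands in for the real web.ProcessBatchData(...) call."""
    for batch in chunked(rows, batch_size):
        # Stand-in for the CAML batch XML built by BuildBatch()
        payload = "".join("<Method ID='%d'>...</Method>" % i for i, _ in enumerate(batch))
        send(payload)
```

Each call then finishes on its own, so no sleep is needed; the batch size is something to tune against your timeout.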
Similar Messages
-
Performance issue of frequently data inserted tables
Hi all,
We have a table named raw_trap_store, with columns trap_id (number, PK), source_ip (varchar2), OID (varchar2), Message (CLOB) and received_time (date).
This table is partitioned into 24 partitions, the partitioning column being received_time (every hour's data is stored in its own partition).
The table is inserted into at 40-50 records/sec on average; the overall number of records for a day is around 2.8-3 million. Data is retained for 2 days.
No updates happen on this table.
Performance issue:
We need a report which selects records from this table based on certain values of Source IP (filter condition on the source_ip column).
We need a report which selects records from this table based on certain values of OID (filter condition on the OID column).
But if I create indexes on the source_ip and OID columns, the inserts get slow. (I created normal indexes, not partitioned indexes.)
Please help me to address the above issue.
Given the nature of your report (based on Source_IP and OID) and the nature of your table partitioning (range partitioned by received_time), you have already made a good decision to create these particular indexes as normal (b-tree, global) rather than locally partitioned indexes. Had you partitioned them locally, your reports would not eliminate partitions (because they do not include the partition key in their where clause), and your index range scans would scan all 24 partitions, generating a lot of logical I/O.
That said, remember that generally we insert once and select many times; you have to balance that. If you are sure that it is the creation of your two indexes that has decreased the insert performance, then you may set them to an unusable state before the insert and rebuild them after. But this is good advice only if the volume of data to be inserted is much bigger than the volume of data existing before the insert.
And if you are not deleting from the table, and the table contains no triggers or integrity constraints (such as FK constraints), then you can opt for a direct path insert using the hint /*+ append */.
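The "make the indexes unusable, bulk load, rebuild once" pattern described above can be sketched generically. This is an illustrative Python + sqlite3 sketch, not Oracle syntax: the table and index names are made up, and since SQLite has no UNUSABLE state the sketch drops and recreates the index (the Oracle equivalents are noted in comments):

```python
# Illustrative: avoid per-row index maintenance during a bulk load by
# removing the secondary index first and rebuilding it once at the end.
import sqlite3

def bulk_load_with_index_rebuild(conn, rows):
    cur = conn.cursor()
    cur.execute("DROP INDEX IF EXISTS idx_source_ip")   # Oracle: ALTER INDEX ... UNUSABLE
    cur.executemany(
        "INSERT INTO raw_trap_store(trap_id, source_ip, oid) VALUES (?, ?, ?)", rows)
    cur.execute("CREATE INDEX idx_source_ip ON raw_trap_store(source_ip)")  # Oracle: ALTER INDEX ... REBUILD
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trap_store (trap_id INTEGER PRIMARY KEY, source_ip TEXT, oid TEXT)")
conn.execute("CREATE INDEX idx_source_ip ON raw_trap_store(source_ip)")
bulk_load_with_index_rebuild(
    conn, [(i, "10.0.0.%d" % (i % 250), "1.3.6.%d" % i) for i in range(1, 5001)])
```

As the advice says, this only pays off when the inserted volume is large relative to what is already in the table; otherwise the rebuild cost dominates.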
Best regards
Mohamed Houri
<mod. action: removed unecessary blog ref.>
Message was edited by: Nicolas.Gasparotto -
Webi Reports - Performance Issues
Hi Experts,
Right now we are using BO XI R2. We have 2 servers: server 1 is old and server 2 is new (an AIX server, a new upgrade of the old server).
When I schedule a report (Webi) on both servers, the report runs successfully. But the report scheduling time is longer on the new server (AIX) than on the old server (server 1).
There are some performance issues.
Example:
Old server: 1 hr (time taken)
New server: 2 hrs (time taken)
Could you please tell me how to improve Webi report performance on the new server?
Regards,
Sridharan Krishnan
Hi,
How do I enable the Excel and PDF options under "Save as file" in InfoView?
When I click the Modify option under the public folder reports, the report opens, but I am not able to save it as Excel or PDF, since those options are disabled in InfoView.
They are, however, enabled for reports in the user's private folder.
We have just upgraded the objects from XI R2 to BO 3.1; since there are some differences in security rights in 3.1, please tell me how to fix this.
BO Version - 3.1
Regards,
Sridharan -
Best way to Insert Millions records in SQL Azure on daily basis?
I maintain millions of records in SQL Server 2008 R2 and now intend to migrate them to SQL Azure.
In the existing system with SQL Server 2008 R2, a few SSIS packages and stored procedures first truncate the existing records and then insert into the table, which holds approx 26 million records, in 30 mins on a daily basis (as the system demands).
When I migrate these to SQL Azure, I am unable to perform these operations as fast as I did in SQL 2008. Sometimes I get a request timeout error.
While searching for a faster way, many people suggest a batch process or BCP. But batch processing is not suitable in my case because it takes too long to insert those records. I require some faster, more efficient way on SQL Azure.
Hoping for some good suggestions.
Thanks in advance :)
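For reference, the batching mentioned above usually means keeping each transaction short: insert in moderate chunks and commit per chunk, so no single request runs long enough to hit the timeout. An illustrative sketch (Python + sqlite3 standing in for SQL Azure; the table name and the 10,000-row chunk size are assumptions to tune):

```python
# Illustrative: chunked inserts with a commit per chunk, so each
# transaction (and each server request) stays short.
import sqlite3

def load_in_chunks(conn, rows, chunk_size=10_000):
    cur = conn.cursor()
    for start in range(0, len(rows), chunk_size):
        cur.executemany("INSERT INTO t(id, val) VALUES (?, ?)",
                        rows[start:start + chunk_size])
        conn.commit()  # end the transaction before it can time out

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
load_in_chunks(conn, [(i, "x") for i in range(1, 50001)])
```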
Ashish Narnoli
+1 to Frank's advice.
Also, please upgrade your Azure SQL Database server to V12, as you will receive higher performance on the premium tiers. As you scale up your database for your bulk insert, remember that SQL Database charges by the hour. To minimize costs, scale back down when the inserts have completed. -
Webi Report Performance issue as compared with backend SAP BW
Hi
We have SAP BW as the backend.
Dashboards and Webi reports are created against SAP BW,
i.e. through a universe on top of a BEx query, then a Webi report, and then a dashboard through Live Office.
My point is that when we create Webi reports with a date range as a parameter (sometimes as a mandatory variable which comes up as a prompt in Webi, sometimes taking the L01 calendar date from BEx and creating a prompt in Webi), the reports take a lot of time to open: 5 minutes, 10 minutes, sometimes 22 minutes.
This type of problem never occurred when the backend was Oracle.
Drilling in the Webi report also takes a lot of time.
So can you suggest any solution?
Hi Gaurav,
We logged this issue with support already, it is acknowledged.
What happens is that whenever you use an infoobject in the condition
(so you pull the object into the condition and build a condition there,
or use that object in a filter object in the universe and then use the filter),
this results in that object being added to the result set.
Since the query will retrieve a lot of different calendar days for the period you selected,
the result set will be VERY big and performance virtually non-existent.
The workaround we used is to use a BEx variable for all date-based selections.
One optional range variable makes it possible to build various types of selections
(less-than with a very early start date, greater-than with a very far-future end date, and between).
Because the range selection is now handled by BEx, the calendar day will not be part of the selection...
Good luck!
Marianne -
Web Reporting Performance issue
Hello All,
We have some web reports which contain dropdown boxes for filters. When I execute a report it takes a very long time, and I realized it is due to filling the filter (characteristic) values in the dropdown boxes; the query execution time is high.
To solve this problem, I removed all the dropdown-box filters and used a navigation block with all the filters. After that, the performance of the web report (initial screen) improved by more than 50%.
Now the issue is that when I click one of the filters of the navigation block to open the filter-values window, it again takes a long time.
Is there any way to improve the response time of opening the filter-values window of a navigation block filter?
Is my approach to improving the performance of a web report advisable, or is there another way?
Please suggest.
Thanks
ravi
Hi Ravi,
try to use <param name="BOOKED_VALUES" value="Q"> in your dropdown box properties.
If you use more than one dropdown box in your web template, you can defer sending the request to the
server until all dropdown boxes are set, by using a submit button.
Here is the code example for this solution:
<form name="form_1" method="post" action="<SAP_BW_URL DATA_PROVIDER='DATAPROVIDER_1'
FILTER_IOBJNM_1=MYOBJ_1 FILTER_IOBJNM_2='MYOBJ_2'>">
<select name="FILTER_VALUE_1" size="1">
<object>
<param name="OWNER" value="SAP_BW">
<param name="CMD" value="GET_ITEM">
<param name="NAME" value="DROPDOWNBOX_1">
<param name="ITEM_CLASS" value="CL_RSR_WWW_ITEM_FILTER_DDOWN">
<param name="DATA_PROVIDER" value="DATAPROVIDER_1">
<param name="GENERATE_CAPTION" value="">
<param name="IOBJNM" value="MYOBJ_1">
<param name="ONLY_VALUES" value="X">
<param name="BOOKED_VALUES" value="Q">
ITEM: DROPDOWNBOX_1
</object>
</select>
<select name="FILTER_VALUE_2" size="1">
<object>
<param name="OWNER" value="SAP_BW">
<param name="CMD" value="GET_ITEM">
<param name="NAME" value="DROPDOWNBOX_2">
<param name="ITEM_CLASS" value="CL_RSR_WWW_ITEM_FILTER_DDOWN">
<param name="DATA_PROVIDER" value="DATAPROVIDER_1">
<param name="GENERATE_CAPTION" value="">
<param name="IOBJNM" value="MYOBJ_2">
<param name="ONLY_VALUES" value="X">
<param name="BOOKED_VALUES" value="Q">
ITEM: DROPDOWNBOX_2
</object>
</select>
<input type="submit" value="Submit ">
</form>
rgds Jens -
Accessing BPEL processes via a proxy web service performance issues
Hello,
I have more BPEL processes implemented, each such a process implementing business functionality in a certain domain (generally, a domain has more business processes).
The request was to provide a single web service for each domain, meaning that all the business methods (processes) in the same domain should be accessed through the same web service. This makes it impossible to expose the BPEL processes themselves as web services that could be directly consumed by the application's various clients.
The alternative would be to implement the "domain" web services through a Java class, for instance. With this approach, the Java-based domain web services expose the needed business methods to the clients, and the Java class takes the request input parameters and calls the corresponding BPEL process via SOAP. This scenario would be fine, but it implies an extra marshalling/unmarshalling step at the domain web service level. The data returned by the BPEL processes can be very large, and in that situation the Java-based domain web service would introduce a significant performance drawback.
Is there any other solution that would allow using "proxy" domain web services without introducing a significant marshalling/unmarshalling overhead?
Many thanks in advance!
Regards,
Marinel
Hello,
First, thank you Sandor for your answer.
I understand that it is possible to create a BPEL process that exposes multiple operations/messages. This would be exactly what I need: a single process (web service) that will expose many operations. Could anyone, please, point me to such an example?
So far I thought it was possible to expose only one operation with a BPEL process, namely what is delimited between the receive/reply blocks (in the synchronous case).
Regards,
Marinel -
1 core VS multi core in a web application: performance issue
Hi,
I'm having trouble with a web application in a multi cpu server (w2ksp4, iis+wl9.2)
I have prepared a set of JMeter stress tests, and the application is only able to complete 5 transactions on a multi-CPU machine (2 CPUs with 2 cores each), but if I bind the JVM of the WebLogic process to only 1 core, the application can handle more than 60 transactions without errors.
I'm on the production side; the developers tell me "hardware problem", but it seems more likely a poorly designed application (as per my previous experience with them).
The symptoms are lots of null pointer exceptions and stuck threads in the multi-core scenario.
Although I have not given a lot of detail, have any of you ever seen something similar?
If anybody needs further information please feel free to ask
Thanks,
Antonio
What operating system are you using?
Make sure you are running a certified JDK and OS configuration:
Oracle Fusion Middleware Supported System Configurations
If you are using a Unix/Linux-based OS, you might be hitting a low-entropy issue; you can add
-Djava.security.egd=file:/dev/./urandom to JAVA_OPTIONS and retest the issue.
Best Regards
Luz -
Web Application Performance Issue (WLS 12)
Hello guys,
I am new to WebLogic. I recently downloaded WebLogic Server 12.1.3.0, created a domain and successfully deployed my web application.
My web app is Java based (mainly servlets, DWR requests, ...); the problem is that it takes forever to start and it is very slow.
I deployed it on Tomcat 7 previously and had no problem; it was very quick.
I tried changing the JVM arguments (-Xms, -Xmx and -XX:MaxPermSize) but no luck with that.
Can anyone kindly help me with this issue?
Thanks in advance.
What operating system are you using?
Make sure you are running a certified JDK and OS configuration:
Oracle Fusion Middleware Supported System Configurations
If you are using a Unix/Linux-based OS, you might be hitting a low-entropy issue; you can add
-Djava.security.egd=file:/dev/./urandom to JAVA_OPTIONS and retest the issue.
Best Regards
Luz -
Tidal 6.1 Web GUI performance issues
Hello,
We are noticing significant delays in the web GUI of Tidal 6.1. We tried all the usual suspects (hardware sizing, logs, etc.) and still see no improvement in performance. We have about 5,000 jobs and our infrastructure is Windows based. Have any of you experienced this problem, and how did you address it?
thank you
Raj
We are a small shop (about 1,000 jobs) and we were extremely slow. I worked with one of the engineers, and here are the changes we made, which helped us a lot. We have 1 master (8 GB), one CM (12 GB), Windows based, with an external Oracle DB. Pretty much these changes are based on size. Hope this helps somebody!
tes-6.1.dsp
*** changes made
CacheSynchronizer.StreamCommitSize from 1000 to 3000
DataCache.ReadConnectionsMin from 5 to 10
DataCache.ReadConnectionsMax from 10 to 15
DataCache.WriteConnectionsMin from 5 to 10
DataCache.WriteConnectionsMax from 10 to 15
DataCache.PageCacheSize from 16384 to 32768
DataCache.ConnectionPoolMinSize from 5 to 10
DataCache.ConnectionPoolMaxSize from 10 to 15
DataCache.StatementCacheSize from 750 to 1500
ClientNode.MinSessionPoolSize from 5 to 10
ClientNode.MaxSessionPoolSize from 10 to 15
Clientmgr.props
*** changes made
JVMARGS=-Xms8192m -Xmx8192m -XX:PermSize=128m -XX:MaxPermSize=128m to
JVMARGS=-Xms10240m -Xmx10240m -XX:PermSize=1024m -XX:MaxPermSize=1024m
ClientSession.MinSessionPoolSize from 5 to 10
ClientSession.MaxSessionPoolSize from 10 to 15
DataSource.MinSessionPoolSize from 5 to 10
DataSource.MaxSessionPoolSize from 10 to 15
Master.props
*** changes made
MessageBroker.MemoryLimit from 256 to 1024
MessageBroker.StoreLimit from 4096 to 12288
ClientConnection.MinSessionPoolSize from 5 to 10
ClientConnection.MaxSessionPoolSize from 10 to 15 -
A way to insert multiple records at a time
Hi All,
Suppose I have three tables A, B and C with the following structure:
A(aid,nameA)
B(bid,nameB)
C(aid,bid) (a mapping between A and B)
Now I want to insert into C one value from A mapped to all values from B. But nameB has duplicate values, so I want to map the aid only to the unique bids in B. Can anyone shed some light?
Thanks In Advance,
JJ
Try
insert into C (AID, BID)
(select unique <x>, BID from B);
where <x> is the AID value (assuming you have a specific AID you want to map to all available unique BIDs).
Edit: Sorry, misread your problem. A bit more complicated then (you need a rule to mitigate BID clashes; I use min()):
insert into C (AID, BID)
(select <x>, min(BID) from B group by NAMEB);
If you need to map every AID to every BID with a distinct NAMEB value, do something like this (cross-product):
insert into C (AID, BID)
(select AID, BID from A, (select min(BID) BID from B group by NAMEB));
-Sp
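A quick way to sanity-check the min(BID)-per-NAMEB approach above (illustrative Python + sqlite3; the sample data and the AID value 42 are made up for the demonstration):

```python
# Illustrative: map one AID to exactly one BID per distinct NAMEB,
# even when NAMEB contains duplicates.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE B (bid INTEGER PRIMARY KEY, nameB TEXT);
CREATE TABLE C (aid INTEGER, bid INTEGER);
INSERT INTO B VALUES (1, 'alpha'), (2, 'alpha'), (3, 'beta');  -- 'alpha' is duplicated
""")
# One (hypothetical) AID mapped to min(bid) of each distinct nameB
conn.execute("INSERT INTO C (aid, bid) SELECT 42, MIN(bid) FROM B GROUP BY nameB")
```

The duplicate 'alpha' rows collapse to a single mapping, so C ends up with one row per distinct nameB.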
Message was edited by:
garfae -
Performance Issue - 2.5 hrs of 37000 Records...
Hello Experts,
Good Day to all...
Following is the Anonymous block which i am using to execute a procedure logic...
Execution time is excessive: around 2.5 hrs for 37,000 records in the departments table.
Moreover, the procedure *"validations.move_process_history"* stores lots of DBMS_OUTPUT messages, and I thought of writing them to a temporary table instead.
Will this increase the performance?
Plz suggest.
Thanks for your suggestions...
DECLARE
CURSOR cur_get_rec
IS
select
di.location_type
,di.location_no
,di.dept_id
from departments di
where di.location_no = 600
and di.location_type = 'CH'
and di.dept_id > 0;
l_cnt NUMBER := 0;
l_error_msg varchar2(32678);
BEGIN
FOR i IN cur_get_rec
LOOP
validations.move_process_history (
p_loc_typ => i.location_type, --> Passing IN to procedure
p_loc_nbr => i.location_no, --> Passing IN to procedure
p_dept_id => i.dept_id, --> Passing IN to procedure
p_ts_start => (SYSTIMESTAMP - 200), --> Passing IN to procedure
p_ts_end => SYSTIMESTAMP, --> Passing IN to procedure
p_mode => 0, --> Passing IN to procedure
p_result => l_cnt, --> Passing OUT ; 1 - Success or 0 - Failure
p_error_msg => l_error_msg --> Passing OUT from procedure i.e. error messages
);
END LOOP;
END;
validations.move_process_history Procedure
PROCEDURE move_process_history (
p_loc_typ departments.loc_type%TYPE,
p_loc_nbr departments.loc_nbr%TYPE,
p_dept_id departments.dept_id%TYPE,
p_ts_start TIMESTAMP,
p_ts_end TIMESTAMP,
p_mode NUMBER,
p_result OUT NUMBER,
p_error_msg OUT VARCHAR2
) IS
CURSOR c_hist IS
SELECT
FROM history
WHERE ...
--TYPE g_hist_table IS TABLE OF history%ROWTYPE;
-- Global Collection of PL/SQL Table for Holding History Records
l_hist_table g_hist_table ;
Begin
dbms_output.put_line('opening cursor');
p_result := 0 ;
OPEN c_hist;
LOOP
dbms_output.put_line('fetching ..');
-- Fetch up to 10000 history records per round trip
FETCH c_hist BULK COLLECT
INTO l_hist_table limit 10000;
FOR i IN 1..l_hist_table.COUNT
Loop
If...then
else
p_error_msg := 'Dept# '||l_hist_table(i).p_loc_nbr ||' Effective Timestamp '||l_hist_table(i).Timestamp ||' Sequence #'
||l_hist_table(i).sequence || ' Mode Id '||l_hist_table(i).mode_id ;
end if;
if... then
else
p_error_msg := 'Dept# '||l_hist_table(i).p_loc_nbr ||' Effective Timestamp '||l_hist_table(i).Timestamp ||' Sequence #'
||l_hist_table(i).sequence || ' Mode Id '||l_hist_table(i).mode_id ;
end if;
End Loop;
-- Flush out the data from memory to the database
l_hist_table.delete;
EXIT WHEN c_hist%NOTFOUND;
END LOOP;
CLOSE c_hist;
Exception When Others then
rollback;
-- Cleanup
if c_hist%ISOPEN then close c_hist; end if;
l_hist_table.delete;
raise;
End move_process_history;
Thanks Billy for your prompt reply.
I am so glad to see your reply.
If you could have some sample eg. of Parallel processing, definitely it will help me in this case.
Moreover to be more specific; departments & History table has 37000 records. --> Reply to Mustafa KALAYCI
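As a side note, the BULK COLLECT ... LIMIT pattern used in the procedure has a direct analogue in most database APIs: fetch and process the cursor in fixed-size chunks instead of row by row. An illustrative sketch (Python + sqlite3; the table layout and the 10,000-row limit are assumptions):

```python
# Illustrative: process a large result set in fixed-size chunks,
# the analogue of PL/SQL's FETCH ... BULK COLLECT INTO ... LIMIT n.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, dept_id INTEGER)")
conn.executemany("INSERT INTO history VALUES (?, ?)",
                 [(i, i % 5) for i in range(1, 37001)])

def process_in_chunks(conn, limit=10_000):
    cur = conn.execute("SELECT id, dept_id FROM history ORDER BY id")
    total = 0
    while True:
        chunk = cur.fetchmany(limit)   # one round trip per chunk
        if not chunk:
            break
        total += len(chunk)            # per-chunk processing would go here
    return total
```

The point is the same as in the PL/SQL version: the per-row work stays in memory, while round trips and commits happen per chunk.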
I will explain the complete logic of the query below.
-- Below code is called in-order to update the information related to the (location_no-600) in the dept_history table.
-- cur_get_rec i.e Departments table contain 37000 records
DECLARE
CURSOR cur_get_rec
IS
select
di.location_type
,di.location_no
,di.dept_id
from departments di
where di.location_no = 600
and di.location_type = 'CH'
and di.dept_id > 0;
l_cnt NUMBER := 0;
l_error_msg varchar2(32678);
BEGIN
FOR i IN cur_get_rec
LOOP
validations.move_process_history (
p_loc_typ => i.location_type, --> Passing IN to procedure
p_loc_nbr => i.location_no, --> Passing IN to procedure
p_dept_id => i.dept_id, --> Passing IN to procedure
p_ts_start => (SYSTIMESTAMP - 200), --> Passing IN to procedure
p_ts_end => SYSTIMESTAMP, --> Passing IN to procedure
p_mode => 0, --> Passing IN to procedure
p_result => l_cnt, --> Passing OUT ; 1 - Success or 0 - Failure
p_error_msg => l_error_msg --> Passing OUT from procedure i.e. error messages
);
END LOOP;
END;
VALIDATIONS.MOVE_PROCESS_HISTORY PROCEDURE
-- Pass all 37000 records for processing.
PROCEDURE move_process_history (
p_loc_typ departments.loc_type%TYPE,
p_loc_nbr departments.loc_nbr%TYPE,
p_dept_id departments.dept_id%TYPE,
p_ts_start TIMESTAMP,
p_ts_end TIMESTAMP,
p_mode NUMBER,
p_result OUT NUMBER,
p_error_msg OUT VARCHAR2
) IS
CURSOR c_sih IS
SELECT
FROM history
WHERE loc_typ = p_loc_typ
AND loc_nbr = p_loc_nbr
AND dept_id = p_dept_id
AND eff_ts BETWEEN p_ts_start AND p_ts_end
ORDER BY eff_ts ASC;
--TYPE g_hist_table IS TABLE OF history%ROWTYPE;
-- Global Collection of PL/SQL Table for Holding History Records
l_hist_table g_hist_table ;
l_abs_row_number NUMBER := 0;
l_ctr NUMBER := 0;
BEGIN
DBMS_OUTPUT.put_line ('opening cursor');
p_result := 0;
OPEN c_sih;
LOOP
-- Flush out the Record Collection if it is not empty
DBMS_OUTPUT.put_line ('fetching ..');
FETCH c_sih
BULK COLLECT INTO l_hist_table LIMIT 10000;
FOR i IN 1 .. l_hist_table.COUNT
LOOP
l_abs_row_number := l_abs_row_number + 1;
IF (l_abs_row_number = 1)
THEN
-- Initialize the Values to local variable
ELSE
p_error_msg := 'Dept# '||l_hist_table(i).p_loc_nbr ||' Effective Timestamp '||l_hist_table(i).Timestamp ||' Sequence #'
||l_hist_table(i).sequence || ' Mode Id '||l_hist_table(i).mode_id ;
END IF;
IF (p_error_msg IS NULL)
THEN
p_error_msg := 'Dept# '||l_hist_table(i).p_loc_nbr ||' Effective Timestamp '||l_hist_table(i).Timestamp ||' Sequence #'
||l_hist_table(i).sequence || ' Mode Id '||l_hist_table(i).mode_id ; --
END IF;
END LOOP;
-- Flush out the data from memory to the database
l_hist_table.DELETE;
IF ( p_mode = validation_correction_rollback
OR p_mode = validation_correction_commit )
THEN
-- calling procedure which Updates History table
update_history;
-- Calling procedure which updates the Departments table
update_departments;
END IF;
-- Delete the Temp Collection for Update
-- Reset Counters
l_ctr := 0;
EXIT WHEN c_sih%NOTFOUND;
END LOOP;
-- Cleanup
CLOSE c_sih;
l_hist_table.DELETE;
IF (p_mode = validation_correction_commit)
THEN
DBMS_OUTPUT.put_line ('Comitting');
COMMIT;
ELSE
DBMS_OUTPUT.put_line ('Rolling Back');
ROLLBACK;
END IF;
p_result := l_abs_row_number;
EXCEPTION
WHEN OTHERS
THEN
ROLLBACK;
-- Cleanup
CLOSE c_sih;
l_hist_table.DELETE;
RAISE;
END move_process_history; -
Performance issue while updating the custom infotype records
Hi All
I have an infotype with 80,000 records. My requirement is to change the infotype's wage type value to certain hard-coded values.
After populating the final internal table, I loop over the records and update the infotype using the HR_INFOTYPE_OPERATION function module.
I have done the coding like below:
LOOP AT lt_infotype ASSIGNING <x>.
  AT NEW pernr.
    " call BAPI_EMPLOYEE_ENQUEUE
  ENDAT.
  " call HR_INFOTYPE_OPERATION
  AT END OF pernr.
    " call BAPI_EMPLOYEE_DEQUEUE
  ENDAT.
ENDLOOP.
But it is taking nearly 15 hours to update all the records. Please suggest a better approach to reduce the execution time.
Thanks & Regards ,
pramodh.mThe delay problem can be solved with HR_INFOTYPE_OPERATION by using it with another FM i.e. HR_PSBUFFER_INITIALIZE the delay happens because of buffer problem. Use HR_PSBUFFER_INITIALIZE inside the loop just before you call HR_INFOTYPE_OPERATION and you wont see that much delay.
-
Hello all:
When I was on a project, I was told that HashSet had once been used but was abandoned since it had a performance issue in operations like insert or update. Is that true, and why? I don't know about the implementation of HashSet; maybe it's slow for the same reason as ArrayList?
And if that's the case, can we write a customized set, and how?
Thanks,
Sway
fathomBoat wrote:
I don't know about the implementation of HashSet, maybe it's slow for the same reason like ArrayList?
ArrayList slow ???
Filling a HashSet is slower than an ArrayList because the HashSet has to check whether an item is already present before adding it (very useful to prevent duplicate entries).
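The same duplicate-elimination behaviour can be seen in a quick sketch (illustrative Python, not a benchmark; it mirrors the Java test in spirit only):

```python
# Illustrative: a hash-based set silently drops duplicates, the extra
# membership work a plain list append never does.
items = [i - (i % 2) for i in range(100000)]  # every value appears twice

as_list = []
for v in items:
    as_list.append(v)   # no duplicate check: keeps all 100000 entries

as_set = set()
for v in items:
    as_set.add(v)       # hash lookup + insert: duplicates collapse to 50000
```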
Here's a small test you're free to complete if you want to test other methods :
public static void main(String[] args) {
    List<Integer> list = new ArrayList<Integer>();
    Set<Integer> set = new HashSet<Integer>();
    // Test on ArrayList
    long startL = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        list.add(Integer.valueOf(i - (i % 2)));
    }
    long endL = System.currentTimeMillis();
    System.out.println("List\t(" + list.size() + ")---> \t" + (endL - startL) + " millisec.");
    // Test on HashSet
    long startS = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        set.add(Integer.valueOf(i - (i % 2)));
    }
    long endS = System.currentTimeMillis();
    System.out.println("Set \t(" + set.size() + ")---> \t" + (endS - startS) + " millisec.");
}
Output:
List (100000)---> 39 millisec.
Set (50000)---> 112 millisec. -
Hi,
I am having a performance issue with inserts on the APS appliance. I am joining 6 tables (1 fact and 5 dimensions). The fact table has 429M records. I tried left outer joins on the keys (all integers) and it generated a system memory error. The fact has 24 months of data, so I tried with one month and it gave the same error. Then I ran the "explain" syntax and changed the SQL query to a right outer join per the PDW suggestion, and there was no change. Once I reduced the columns from 55 to 10 (which I want in the report fact) and got the result for one month only (185M rows), it generated the result in 27 min. I ran statistics and reran the query, and the time reduced to 17 min. The fact table is distributed with a columnstore index and has 20 columns. The other dim tables have 20 to 30 columns, and I am joining them all to build the report fact.
Where am I going wrong? Why is this query taking so long? Any help will be highly appreciated. If there is any need for more clarification, please let me know.
Thanks,
The error was due to lack of storage; the admin increased the space allocated for the table. The main issue is that the query is just taking a lot of time. To give you some perspective, please see the times below. I am also including the error and SQL (I have selected fields from the FACT and DIMs).
Month  Count       Time
1      18,588,096  27:46 min (17:30 after running statistics)
2      18,870,292  18:21
3      18,599,067  16:58
4      18,461,490  16:38
/* system Error*/
Msg 110802, Level 16, State 1, Line 22
An internal DMS error occurred that caused this operation to fail. Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Workers.DmsSqlNativeException, Message: SqlNativeBufferBufferBulkCopy.WriteToServer, error in OdbcWriteBuffer: SqlState: , NativeError: 0, 'Error calling: bcp_batch(pConn->GetHdbc()) | SQL Error Info: SrvrMsgState: 0, SrvrSeverity: 0, | Error calling: pBcpConn->WriteBuffer(pBuffer, bufferOffset, bufferLength, pRowsWritten) | state: FFFF, number: 55833, active connections: 9', Connection String: Driver={SQL Server Native Client 11.0};APP=DmsNativeWriter:MMPDDM-CMP05\sqldwdms (6760) - ODBC;Trusted_Connection=yes;AutoTranslate=no;Server=MMPDDM-SQLCMP05,1500
--Explain
SELECT
FACT.*, DIM1.*, DIM2.*, DIM3.*, DIM4.*, DIM5.*, DIM6.*, DIM7.*
FROM FACT
LEFT OUTER JOIN dbo.DIM1 ON DIM1.DIM1_SEQ_NBR = FACT.DIM1_SEQ_NBR
LEFT OUTER JOIN dbo.DIM2 ON DIM2.DIM2_SEQ_NBR = FACT.DIM2_SEQ_NBR
LEFT OUTER JOIN dbo.DIM3 ON DIM3.DIM3_SEQ_NBR = FACT.DIM3_SEQ_NBR
LEFT OUTER JOIN dbo.DIM4 ON DIM4.DIM4_SEQ_NBR = FACT.DIM4_SEQ_NBR
LEFT OUTER JOIN dbo.DIM5 ON DIM5.DIM5_SEQ_NBR = FACT.DIM5_SEQ_NBR
LEFT OUTER JOIN dbo.DIM6 ON DIM6.DIM6_SEQ_NBR = FACT.DIM6_SEQ_NBR
LEFT OUTER JOIN dbo.DIM7 ON DIM7.DIM7_SEQ_NBR = FACT.DIM7_SEQ_NBR
WHERE FACT.DIM2_SEQ_NBR = 18 -- (FOR ONE MONTH)
OPTION (HASH JOIN);