To update a large dataset in a columnar database (Sybase IQ)
Hi,
I want to update a column with random values in Sybase IQ. The number of rows is very large (approx 2 crore, i.e. 20 million).
I have created a procedure using a cursor.
It works fine with a small dataset but has performance issues with a large one.
Is there a workaround for this issue?
Regards,
Neha Khetan
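Row-by-row cursor updates are usually the bottleneck here; columnar engines such as Sybase IQ strongly favor a single set-based UPDATE. A minimal sketch of the idea in Python with SQLite (the table and column names are hypothetical, and IQ's SQL dialect and random-number functions will differ):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, rand_col INTEGER)")
conn.executemany("INSERT INTO big_table (id) VALUES (?)",
                 [(i,) for i in range(10_000)])

# Cursor-style approach: one UPDATE per row -- this is what gets slow at scale.
# for (row_id,) in conn.execute("SELECT id FROM big_table"):
#     conn.execute("UPDATE big_table SET rand_col = ? WHERE id = ?",
#                  (random.randint(1, 1_000_000), row_id))

# Set-based approach: one statement touches every row and lets the engine
# batch the work internally.
conn.create_function("rand_int", 0, lambda: random.randint(1, 1_000_000))
conn.execute("UPDATE big_table SET rand_col = rand_int()")
conn.commit()

updated = conn.execute(
    "SELECT COUNT(*) FROM big_table WHERE rand_col IS NOT NULL").fetchone()[0]
print(updated)
```

In IQ itself the equivalent would be a single `UPDATE ... SET col = <random expression>`, possibly committed in ranges of the primary key so the rollback space stays bounded.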
Hi Eugene,
Is it possible to implement this in BDB JE somehow?
Yes, you can create a new separate database for storing the sets of integers. Each record in this database would be one partition (e.g., 1001-2000) for one record in the "main" database.
The key to this database would be a two part key:
- the key to the "main" database, followed by
- the beginning partition value (e.g., 1001)
For example:
Main Database:
Key Data
X string/integer parameters for X
Y string/integer parameters for Y
Integer Partition Database:
Key Data
X,1 Set of integers in range 1-1000 for X
X,1001 Set of integers in range 1001-2000 for X
Y,1 Set of integers in range 1-1000 for Y
Y,1001 Set of integers in range 1001-2000 for Y
...
Two-part keys are easy to implement with a tuple binding. You simply read/write the two fields of the record key, one after another, in the same way that you read/write multiple fields in the record data.
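In BDB JE this would be done with a Java tuple binding, but the idea is language-neutral: serialize the two key parts so that plain byte-wise comparison matches (main key, partition start) ordering. A sketch in Python (the exact encoding here is an assumption for illustration, not BDB JE's actual tuple format):

```python
import struct

def make_partition_key(main_key: str, partition_start: int) -> bytes:
    """Two-part key: the main-database key, then the partition's first value.

    A NUL separator after the string plus a 4-byte big-endian unsigned int
    keeps byte comparison consistent with (main_key, partition_start) order.
    """
    return main_key.encode("utf-8") + b"\x00" + struct.pack(">I", partition_start)

# All partitions for "X" sort together, ordered by partition start,
# and ahead of every partition for "Y" -- matching the table above.
keys = [make_partition_key("Y", 1), make_partition_key("X", 1001),
        make_partition_key("X", 1)]
keys.sort()
print(keys == [make_partition_key("X", 1),
               make_partition_key("X", 1001),
               make_partition_key("Y", 1)])
```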
Mark
Similar Messages
-
Mail freezes when updating large RSS feed boxes
This is something I encountered, and here is the workaround I came up with. I discovered that Mail will freeze when updating large RSS feed boxes.
A little history. After discovering RSS and how feeds work, I began building a collection of them, including feeds from many major news providers. Over the years, the number of articles from some of these sources went into the thousands.
As my database of articles grew, I encountered more severe and more frequent freezes of the Mail application. I began taking the computer offline in order to work in these mailboxes, but inevitably a situation would arise that led to Mail freezing.
I isolated the issue to the RSS boxes within Mail. The freeze would not occur while the RSS feed boxes were collapsed. Also, the freeze only affected the Mail application: Mac OS was not affected, and I was always able to close Mail from the Force Quit menu. The Force Quit menu also confirmed that Mail was indeed frozen by listing it as "not responding."
The workaround. First, I unsubscribed from the RSS feeds that I used very infrequently. Second, I changed the option to delete old feed articles from "never" to "every two weeks" in Mail's preferences (RSS sub-menu).
I think it took a while for Mail to fully delete messages older than two weeks. In fact, when I began deleting whole feeds, it took some time for those feeds to be removed from my mailbox tree. In Activity Monitor I could see that a lot of disk use was occurring, even though the OS allowed me to continue using Mail and other applications.
To assist this process, I took my computer offline and stepped away from it. Upon my return, disk use was back to normal, the number of articles in many RSS boxes was greatly reduced, and my disk had recovered over a GB of space. Mail now seems to be behaving properly, with smooth and quick performance.
If you found this article, I hope the information has been helpful! A quick search of previous posts did not turn up an entirely similar one, though others are finding that Mail freezes, not necessarily for the same reason.
Since I don't want to download any attachments from RSS feeds in Mail.app, is there any way to turn off attachment downloads once and for all? I also get the beach ball for minutes when an item has a big attachment, and I fear my HD is cluttered with files I don't use.
-
How to update the Dataset of a report running with universe?
I am facing an issue with a cross-tab.
My requirement is: on clicking any cell in the cross-tab, I want to convert it into an editable cell.
Converting it into an editable cell is something I achieved using the code below:
function onCellClick(e) {
    // Take the cell's current text as the initial input value.
    var text = $.trim(this.innerHTML);
    $('<input />').attr({ type: 'text', name: 'text' })
        .appendTo($(this)).val(text).select().blur(function () {
            // On blur, write the edited value back into the cell and the data model.
            var newText = $(this).val();
            $(this).parent().text(newText).find('input').remove();
            var rIndex = $(this).closest('tr').index();
            data.data[rIndex] = newText;
            that.firePropertiesChanged(["data"]);
            //that.firePropertiesChanged(["visSelection"]);
            that.fireEvent("onSelect");
        });
}
I just modified the sample code.
My report runs against a universe.
Now I want to update the dataset with this updated value.
Can anyone provide any help on this?
Thanks
Hi Michael,
You got it right.
Let me tell you the whole story:
I have an unusual requirement to create an editable grid, where the values I edit in the grid get saved to the database.
Using JavaScript I am able to edit a grid cell item in the HTML.
After this I have two hitches:
1. I am not able to get the updated cell values in Design Studio. I think this issue is on the SDK side. I tried creating an external script variable and using it in Design Studio, but somehow it always comes back blank.
2. I am not able to update the dataset. I know updating the dataset permanently is not possible since it is generated from the universe, but I just want to update it so that any change in measure values also updates the total of that column.
I started on this only this week, so I might be asking a few stupid questions. Please bear with me.
Thanks
Amit -
Updating array data in a SQL database
Hi,
I'm facing problems updating array data in a SQL database.
As of now, I am able to write an "insert" query and insert array data into an image-datatype field. I'm using the image datatype because the array size is very big (around 80,000 x and y values).
Although inserting data is easy, I'm unable to write a query to update this data.
Referring to the SQL Server help and the LabVIEW Database Connectivity Toolkit, I came across a method of accessing the image datatype using text pointers, which are 16-byte binary values, and using the WRITETEXT function instead of the UPDATE function.
But the problem I'm facing is that I have to pass the array as a 2D string array in the query; as a result, the updated array is retrieved in the form of a string and not as an array. How do I get over this problem?
Hi Pavitra,
I'm not very clear on how you have inserted the data in your application, but I do know that when you call the UPDATETEXT or WRITETEXT function you use the TEXTPOINTERS to point to the first location of a 1D array. So, depending on how you've stored the data, you may have problems updating it if you're looking at it as a 1D array instead of how you originally formatted it. If you are able to successfully access the data as a 1D array, you can use the Database Variant To Data VI and pass in a string-array constant for the data type. This will convert the variant datatype into whatever you specify. You may have to index the row and column of the variant (you receive a 2D array of variants) first before you convert. If possible, can you provide some more detail and maybe some example code of how you perform the insert and plan to do the update? I can probably give you a better solution if I know how you are formatting the data. Thanks!
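The underlying task, independent of LabVIEW, is storing a large 2D numeric array as one binary value and getting the same array back on read, with no string round trip. A sketch of that round trip in Python with SQLite (the packing format and table are assumptions for illustration; the real application would match whatever layout the original insert used):

```python
import sqlite3
import struct

def pack_xy(points):
    """Pack a list of (x, y) float pairs into one binary blob."""
    flat = [v for pair in points for v in pair]
    return struct.pack(f"<{len(flat)}d", *flat)

def unpack_xy(blob):
    """Recover the (x, y) pairs from the blob -- the layout survives intact."""
    flat = struct.unpack(f"<{len(blob) // 8}d", blob)
    return list(zip(flat[0::2], flat[1::2]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE waveform (id INTEGER PRIMARY KEY, data BLOB)")
points = [(float(i), float(i) * 0.5) for i in range(80_000)]
conn.execute("INSERT INTO waveform (data) VALUES (?)", (pack_xy(points),))

# UPDATE replaces the whole blob in one statement; the array is never
# converted to a 2D string array on the way in or out.
points[0] = (-1.0, -2.0)
conn.execute("UPDATE waveform SET data = ? WHERE id = 1", (pack_xy(points),))

blob = conn.execute("SELECT data FROM waveform WHERE id = 1").fetchone()[0]
print(unpack_xy(blob)[0])
```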
Jeremy L.
National Instruments
-
How to update field values in a database table using a module pool program?
Hi,
How can I update field values in a database table using a module pool program?
We created a customized table, and we put two pushbuttons on the screen in Screen Painter: Update and Display.
But Update is not working.
Data is entered into the screen fields and into the internal table, but it is not updated in the database table.
Thanks in adv
vidya
Hi,
We already used the UPDATE statement, but it's not working.
Please check this:
*& Module Pool ZCUST_CALL_REC
PROGRAM ZCUST_CALL_REC.
TABLES: ZCUST_CALL_REC,ZREMARKS.
data: v_kun_low like ZCUST_CALL_REC-kunnr ,
v_kun_high like ZCUST_CALL_REC-kunnr,
v_bud_low like ZCUST_CALL_REC-budat,
v_bud_high like ZCUST_CALL_REC-budat.
ranges r_kunnr for ZCUST_CALL_REC-kunnr .
ranges r_budat for zcust_call_rec-budat.
DATA: ITAB TYPE STANDARD TABLE OF ZCUST_CALL_REC WITH HEADER LINE,
JTAB TYPE STANDARD TABLE OF ZREMARKS WITH HEADER LINE.
*data:begin of itab occurs 0,
MANDT LIKE ZCUST_CALL_REC-MANDT,
kunnr like ZCUST_CALL_REC-kunnr,
budat like ZCUST_CALL_REC-budat,
code like ZCUST_CALL_REC-code,
remarks like ZCUST_CALL_REC-remarks,
end of itab.
*data:begin of Jtab occurs 0,
MANDT LIKE ZCUST_CALL_REC-MANDT,
kunnr like ZCUST_CALL_REC-kunnr,
budat like ZCUST_CALL_REC-budat,
code like ZCUST_CALL_REC-code,
remarks like ZCUST_CALL_REC-remarks,
end of Jtab.
CONTROLS:vcontrol TYPE TABLEVIEW USING SCREEN '9001'.
CONTROLS:vcontrol1 TYPE TABLEVIEW USING SCREEN '9002'.
*start-of-selection.
*& Module USER_COMMAND_9000 INPUT
text
MODULE USER_COMMAND_9000 INPUT.
CASE sy-ucomm.
WHEN 'BACK' OR 'EXIT' OR 'CANCEL'.
SET SCREEN 0.
LEAVE SCREEN.
CLEAR sy-ucomm.
WHEN 'ENQUIRY'.
perform multiple_selection.
perform append_CUSTOMER_code.
PERFORM SELECT_DATA.
call screen '9001'.
WHEN 'UPDATE'.
perform append_CUSTOMER_code.
PERFORM SELECT_DATA.
call screen '9002'.
perform update on commit.
WHEN 'DELETE'.
perform append_CUSTOMER_code.
PERFORM SELECT_DATA.
call screen '9002'.
ENDCASE.
ENDMODULE. " USER_COMMAND_9000 INPUT
*& Module STATUS_9000 OUTPUT
text
MODULE STATUS_9000 OUTPUT.
SET PF-STATUS 'ZCUSTOMER'.
SET TITLEBAR 'xxx'.
ENDMODULE. " STATUS_9000 OUTPUT
*& Module USER_COMMAND_9001 INPUT
text
MODULE USER_COMMAND_9001 INPUT.
CASE sy-ucomm.
WHEN 'BACK' OR 'EXIT' OR 'CANCEL'.
SET SCREEN 0.
LEAVE SCREEN.
CLEAR sy-ucomm.
endcase.
ENDMODULE. " USER_COMMAND_9001 INPUT
*& Module STATUS_9001 OUTPUT
text
MODULE STATUS_9001 OUTPUT.
SET PF-STATUS 'ZCUSTOMER'.
SET TITLEBAR 'xxx'.
move itab-MANDT to zcust_call_rec-MANDT.
move itab-kunnr to zcust_call_rec-kunnr.
move itab-budat to zcust_call_rec-budat.
move itab-code to zcust_call_rec-code.
move itab-remarks to zcust_call_rec-remarks.
vcontrol-lines = sy-dbcnt.
ENDMODULE. " STATUS_9001 OUTPUT
*& Module USER_COMMAND_9002 INPUT
text
module USER_COMMAND_9002 input.
CASE sy-ucomm.
WHEN 'BACK' OR 'EXIT' OR 'CANCEL'.
SET SCREEN 0.
LEAVE SCREEN.
CLEAR sy-ucomm.
WHEN 'UPDATE'.
perform move_data.
UPDATE ZCUST_CALL_REC FROM TABLE ITAB.
IF SY-SUBRC = 0.
MESSAGE I000(0) WITH 'RECORDS ARE UPDATED'.
ELSE.
MESSAGE E001(0) WITH 'RECORDS ARE NOT UPDATED'.
ENDIF.
WHEN 'DELETE'.
perform move_data.
DELETE ZCUST_CALL_REC FROM TABLE ITAB.
IF SY-SUBRC = 0.
MESSAGE I000(0) WITH 'RECORDS ARE DELETED'.
ELSE.
MESSAGE E001(0) WITH 'RECORDS ARE NOT DELETED'.
ENDIF.
endcase.
endmodule. " USER_COMMAND_9002 INPUT
*& Module STATUS_9002 OUTPUT
text
module STATUS_9002 output.
SET PF-STATUS 'ZCUSTOMER1'.
SET TITLEBAR 'xxx'.
endmodule. " STATUS_9002 OUTPUT
*& Module update_table OUTPUT
text
module update_table output.
move itab-MANDT to zcust_call_rec-MANDT.
move itab-kunnr to zcust_call_rec-kunnr.
move itab-budat to zcust_call_rec-budat.
move itab-code to zcust_call_rec-code.
move itab-remarks to zcust_call_rec-remarks.
vcontrol-lines = sy-dbcnt.
endmodule. " update_table OUTPUT
***Selection Data
FORM SELECT_DATA.
SELECT mandt kunnr budat code remarks FROM zcust_call_rec INTO
table itab
WHERE kunnr IN r_kunnr AND BUDAT IN R_BUDAT.
ENDFORM.
****append vendor code
FORM APPEND_CUSTOMER_CODE.
clear r_kunnr.
clear itab.
clear r_budat.
refresh r_kunnr.
refresh itab.
refresh r_kunnr.
IF r_kunnr IS INITIAL
AND NOT v_kun_low IS INITIAL
AND NOT v_kun_high IS INITIAL.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
input = v_kun_low
IMPORTING
OUTPUT = r_kunnr-low.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
input = v_kun_high
IMPORTING
OUTPUT = r_kunnr-high.
r_kunnr-option = 'BT'.
r_kunnr-sign = 'I'.
append r_kunnr.
PERFORM V_BUDAT.
ELSEIF r_kunnr IS INITIAL
AND NOT v_kun_low IS INITIAL
AND v_kun_high IS INITIAL.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
input = v_kun_low
IMPORTING
OUTPUT = r_kunnr-low.
r_kunnr-SIGN = 'I'.
r_kunnr-OPTION = 'EQ'.
APPEND r_kunnr.
PERFORM V_BUDAT.
ELSEIF r_kunnr IS INITIAL
AND v_kun_low IS INITIAL
AND NOT v_kun_high IS INITIAL.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
input = v_kun_low
IMPORTING
OUTPUT = r_kunnr-low.
r_kunnr-SIGN = 'I'.
r_kunnr-OPTION = 'EQ'.
APPEND r_kunnr.
PERFORM V_BUDAT.
ELSEIF r_kunnr IS INITIAL
AND v_kun_low IS INITIAL
AND v_kun_high IS INITIAL.
IF SY-SUBRC = 0.
MESSAGE I003(0) WITH 'ENTER CUSTOMER NUMBER'.
CALL SCREEN '9000'.
ENDIF.
PERFORM V_BUDAT.
ENDIF.
ENDFORM.
FORM V_BUDAT.
IF R_BUDAT IS INITIAL
AND NOT v_BUD_low IS INITIAL
AND NOT v_BUD_high IS INITIAL.
r_budat-low = v_bud_low.
r_budat-high = v_bud_high.
r_budat-option = 'BT'.
r_budat-sign = 'I'.
append r_budat.
ELSEIF R_BUDAT IS INITIAL
AND NOT v_BUD_low IS INITIAL
AND v_BUD_high IS INITIAL.
r_budat-low = v_bud_low.
r_budat-high = v_bud_high.
r_budat-option = 'EQ'.
r_budat-sign = 'I'.
append r_budat.
ELSEIF R_BUDAT IS INITIAL
AND v_BUD_low IS INITIAL
AND NOT v_BUD_high IS INITIAL.
r_budat-HIGH = v_bud_HIGH.
r_budat-option = 'EQ'.
r_budat-sign = 'I'.
append r_budat.
ELSEIF R_BUDAT IS INITIAL
AND v_BUD_low IS INITIAL
AND v_BUD_high IS INITIAL.
IF SY-SUBRC = 0.
MESSAGE I002(0) WITH 'ENTER POSTING DATE'.
CALL SCREEN '9000'.
r_budat-low = ''.
r_budat-option = ''.
r_budat-sign = ''.
ENDIF.
ENDIF.
ENDFORM.
*& Form update
text
--> p1 text
<-- p2 text
form update .
commit work.
endform. " update
*& Form move_data
text
--> p1 text
<-- p2 text
form move_data .
clear itab.
refresh itab.
move-corresponding zcust_call_rec to itab.
MOVE ZCUST_CALL_REC-MANDT TO ITAB-MANDT.
MOVE ZCUST_CALL_REC-KUNNR TO ITAB-KUNNR.
MOVE ZCUST_CALL_REC-BUDAT TO ITAB-BUDAT.
MOVE ZCUST_CALL_REC-CODE TO ITAB-CODE.
MOVE ZCUST_CALL_REC-REMARKS TO ITAB-REMARKS.
APPEND ITAB.
delete itab where kunnr is initial.
endform. " move_data
thanks in adv
vidya -
Multiple updates on dataset = concurrency exception
I am getting concurrency exceptions whenever I try to update a dataset more than once. For example, if I fill a dataset, modify data, and then call the adapter's Update method, it works fine. However, if I fill a dataset, modify data, call Update, modify more data, and then call Update again, I get a concurrency exception. I know that the table can be updated with no problem, and I do not get this problem with other data providers. It seems that ODP requires you to requery your dataset after each update, which doesn't seem correct to me. I tried requerying the dataset after each update and that does indeed work, but it seems unnecessary. A sample code snippet is below. Can someone let me know what I am missing?
In the snippet below, the "dataSet" variable is a dataset which has already been filled, and one record has already been modified.
Dim command As New OracleCommand
command.Connection = connection
command.CommandText = sqlhelper.getStatement("evaluation", "update")
'first insert
adapter.TableMappings.Clear()
adapter.TableMappings.AddRange(sqlhelper.evaluation_table_mapping)
Dim cb As New OracleCommandBuilder(adapter)
adapter.SelectCommand = command
adapter.Update(dataSet, "egov_evaluation")
'2nd insert
'change some data
CType(dataSet.Tables(0).Rows(0), EvaluationDataSet.egov_evaluationRow).gov_agencies_attendees = "new test update"
adapter.TableMappings.Clear()
adapter.TableMappings.AddRange(sqlhelper.evaluation_table_mapping)
Dim newcb As New OracleCommandBuilder(adapter)
adapter.SelectCommand = command
adapter.Update(dataSet, "egov_evaluation")
Followup:
This issue was caused by the SQL statement generated by the command builder. For those who haven't seen it yet, the command builder will generate a SQL statement like "UPDATE <table> SET <column1> = <value1>, <column2> = <value2> WHERE <column1> = <orig column1 value> AND <column2> = <orig column2 value>".
So if any of the fields don't match up in the WHERE clause, you get the concurrency exception. We were getting stale data, plus another issue caused by a bad query generated by the command builder, but we worked around this by writing our own update statements.
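This failure mode can be reproduced outside ODP.NET: optimistic concurrency puts the originally fetched values in the WHERE clause, so a second update that still carries stale originals matches zero rows. A minimal sketch in Python with SQLite (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE evaluation (id INTEGER PRIMARY KEY, attendees TEXT)")
conn.execute("INSERT INTO evaluation VALUES (1, 'original')")

def optimistic_update(orig_value, new_value):
    # The WHERE clause checks the value we *think* the row still holds,
    # exactly like the command-builder-generated UPDATE described above.
    cur = conn.execute(
        "UPDATE evaluation SET attendees = ? WHERE id = 1 AND attendees = ?",
        (new_value, orig_value))
    return cur.rowcount  # 0 rows affected is what surfaces as a concurrency exception

first = optimistic_update("original", "first update")
# Second update still using the stale original value matches nothing.
stale = optimistic_update("original", "second update")
# Refreshing the "original" (i.e. requerying, as in the post) works again.
fresh = optimistic_update("first update", "second update")
print(first, stale, fresh)
```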
The only annoying thing is that our app doesn't care about concurrency, so we actually wanted to turn off the concurrency checking in the WHERE clause, but, such is life. -
Update A Column In A Database Table
I am unable to update a column in a database table.
For example, I have ten records in an EMP table without any EMPNO. I want to UPDATE (insert) ten different EMPNOs in the table. How can I do it? All I know is that there are ten records in the table; this means that I cannot use a WHERE clause with different criteria for each row.
Thanks.
Try something like this:
SQL> select * from emp_1
2 /
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
14 rows selected.
SQL> insert into emp_1(empno, ename, job, mgr, hiredate, sal, comm, deptno)
2 select null, ename, job, mgr, hiredate, sal, comm, deptno
3 from emp_1
4 where rownum <= 10
5 /
10 rows created.
SQL> select * from emp_1
2 /
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
SMITH CLERK 7902 17-DEC-80 800 20
ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
WARD SALESMAN 7698 22-FEB-81 1250 500 30
JONES MANAGER 7839 02-APR-81 2975 20
MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
BLAKE MANAGER 7839 01-MAY-81 2850 30
CLARK MANAGER 7839 09-JUN-81 2450 10
SCOTT ANALYST 7566 19-APR-87 3000 20
KING PRESIDENT 17-NOV-81 5000 10
TURNER SALESMAN 7698 08-SEP-81 1500 0 30
24 rows selected.
SQL> select max(empno) from emp_1
2 /
MAX(EMPNO)
7934
SQL> create sequence emp_seq start with 7935
2 /
Sequence created.
SQL> update emp_1 set empno = emp_seq.nextval where empno is null
2 /
10 rows updated.
SQL> select * from emp_1
2 /
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
7935 SMITH CLERK 7902 17-DEC-80 800 20
7936 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7937 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7938 JONES MANAGER 7839 02-APR-81 2975 20
7939 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7940 BLAKE MANAGER 7839 01-MAY-81 2850 30
7941 CLARK MANAGER 7839 09-JUN-81 2450 10
7942 SCOTT ANALYST 7566 19-APR-87 3000 20
7943 KING PRESIDENT 17-NOV-81 5000 10
7944 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
24 rows selected. -
Is anyone working with large datasets (>200M) in LabVIEW?
I am working with external bioinformatics databases and find the datasets to be quite large (2 files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?
Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1GB of memory, but you still have to take care to not make copies of your data in your program. That said, I would not be surprised if your code could be written so that it would work on a machine with much less ram by using efficient algorithms. I am not a statistician, but I know that the averages & standard deviations can be calculated using a few bytes (even on arbitrary length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, and not need to ever have the entire data set in memory at one time. The tricky part for your application may be getting the desired data at the necessary times from all those different sources. I am usually working with files on disk where I grab x samples at a time, perform the statistics, dump the samples and get the next set, repeat as necessary. I can calculate the average of an arbitrary length data set easily by only loading one sample at a time from disk (it's still more efficient to work in small batches because the disk I/O overhead builds up).
Let me use the calculation of the mean as an example (hopefully the notation makes sense): see the jpg. What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it and divide by 5, and you get 3. Or take it a point at a time: the average of [1]=1, [2+1*1]/2=1.5, [3+1.5*2]/3=2, [4+2*3]/4=2.5, [5+2.5*4]/5=3. This second method requires far more multiplications and divisions, but it only ever requires remembering the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW Pt by Pt statistics functions.
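The point-at-a-time mean described above is straightforward to express in code. Here is a sketch in Python, with a one-pass variance update included as well (Welford's algorithm, one standard form of the "similar derivation" the post mentions):

```python
def running_stats(samples):
    """Mean and sample variance in one pass, holding O(1) state.

    mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n  -- the same recurrence as
    the [1 2 3 4 5] walk-through in the post. m2 accumulates squared
    deviations for the variance (Welford's algorithm).
    """
    n, mean, m2 = 0, 0.0, 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance

# The example from the post: [1 2 3 4 5] averages to 3, computed without
# ever holding more than one sample plus the running state. Passing an
# iterator emphasizes that the whole data set is never in memory at once.
mean, var = running_stats(iter([1, 2, 3, 4, 5]))
print(mean, var)
```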
I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
Hope this helps!
Chris
Attachments:
Mean Derivation.JPG 20 KB -
Update XDP datasets after filling and saving static forms
Hi all,
I've got many static XFA forms which do not behave the same way after filling in and saving in Acrobat. Most of them update the XDP datasets entry inside the XFA, but some of them do not. The result is that after extracting the XDP from the PDF form, the datasets contain just the default values and not the filled-in values.
Are there any rules for updating the datasets entry when saving a filled form?
Thanks for any help
Jozef
I'm new to VS. I have run the following code. It does not produce any error, and it does not add or update data in my Access database table.
dbUpdate("UPDATE prgSettings SET varValue='test' WHERE varSetting='test'")
Function dbUpdate(ByVal _SQLupdate As String) As String
    Dim OleConn As New OleDbConnection(My.Settings.DatabaseConnectionString.ToString)
    Dim oleComm As OleDbCommand
    Dim returnValue As Object
    Dim sqlstring As String = _SQLupdate.ToString
    Try
        OleConn.Open()
        MsgBox(OleConn.State.ToString)
        oleComm = New OleDbCommand(sqlstring, OleConn)
        ' ExecuteNonQuery returns the number of rows affected;
        ' 0 means the WHERE clause matched no rows.
        returnValue = oleComm.ExecuteNonQuery()
    Catch ex As Exception
        ' Error occurred while trying to execute the command
        ' send error message to console (change below line to customize error handling)
        Console.WriteLine(ex.Message)
        Return "0"
    Finally
        OleConn.Close()
    End Try
    MsgBox(returnValue.ToString)
    Return returnValue.ToString
End Function
Any suggestions will be appreciated.
Thanks.
Your code looks pretty good, at a quick glance. Maybe you can simplify things a bit.
For Insert, please see these samples.
http://www.java2s.com/Code/CSharp/Database-ADO.net/Insert.htm
For Update, please see these samples.
http://www.java2s.com/Code/CSharp/Database-ADO.net/Update.htm
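One common reason for "no error, but no update" is that the WHERE clause silently matches zero rows, so the row count returned by ExecuteNonQuery is the thing to check. A sketch of that pattern in Python with SQLite (the prgSettings table is modeled after the one in the question; the data values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prgSettings (varSetting TEXT, varValue TEXT)")
conn.execute("INSERT INTO prgSettings VALUES ('test', 'old')")

def db_update(sql, params=()):
    # Parameterized statements avoid quoting bugs in hand-built SQL strings.
    cur = conn.execute(sql, params)
    conn.commit()
    return cur.rowcount  # rows actually changed -- the ExecuteNonQuery analogue

hit = db_update("UPDATE prgSettings SET varValue = ? WHERE varSetting = ?",
                ("new", "test"))
miss = db_update("UPDATE prgSettings SET varValue = ? WHERE varSetting = ?",
                 ("new", "no-such-setting"))
print(hit, miss)
```

If the returned count is 0, the UPDATE "succeeded" without touching anything; checking it (or the connection string's target file) is the first diagnostic step.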
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
Best to keep samples here to VB.NET
Please remember to mark the replies as answers if they help and unmark them if they provide no help, this will help others who are looking for solutions to the same or similar problem. -
Dear team support,
I have a problem with my WhatsApp Messenger.
My WhatsApp won't save message history; it causes an error.
Error: SQLite error (schema update):
net.rim.device.api.database.DatabaseException: SELECT name FROM sqlite_master WHERE type = 'index' AND name = 'chat_history_jid_index': disk I/O error (10).
Please advise me how I can solve my memory card issue.
Thanks
ls -l /var/run/lighttpd/
And how are you spawning the php instances? I don't see that in the daemons array anywhere.
EDIT: It looks like the info on that page is no longer using pre-spawned instances, but lighttpd adaptive spawning. It looks like the documentation has been made inconsistent.
You will note that with pre-spawned instances, the config looks different[1].
You need to do one or the other, not both (e.g. choose adaptive-spawn or pre-spawn, not both).
[1]: http://wiki.archlinux.org/index.php?tit … oldid=8051 "change" -
Issues when Downloading Large Datasets to Excel and CSV
Hi,
Hoping someone could lend a hand on the issues described below.
I have a prompted dashboard that, depending on the prompts selected, can return detail datasets. The intent of this dashboard is to AVOID giving end users Answers access, while still providing the ability to pull large amounts of detail data in an ad-hoc fashion. When large datasets are returned, end users will download the data to their local machines and use Excel for further analysis. I have tried two options:
1) Download to CSV
2) Download data to Excel
For my test, I am using the dashboard prompts to return one year's (2009) worth of order data for North America, down to the day level of granularity. Yes, a lot of detail data... but this is what many "dataheads" at my organization are requesting (despite best efforts to evangelize the power of OBIEE to do the aggregation for them...). I expect this report to return somewhere around 200k rows.
Here are the results:
1) Download to CSV
Filesize: 78MB
Opening the downloaded file is fairly quick...
126k rows are present in the CSV file... but the dataset abruptly ends in Q3 (August) 2009. The following error appears at the end of the incomplete dataset:
Odbc driver returned an error (SQLFetchScroll).
Error Codes: OPR4ONWY:U9IM8TAC
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
[nQSError: 46073] Operation 'stat()' on file '/opt/apps/oracle/obiee/OracleBIData/tmp/nQS_31951_2986_15442940.TMP' failed with error: (75). (HY000)
2) Download to Excel
Filesize: 46MB
Opening the Excel file is extremely painful... over 20 minutes to open the file, making Excel unusable during the opening process. Definitely not acceptable for end users.
When opened, the file contains only 65k rows, when there should be over 200k.
Can you please help me understand the limitations of detail-data output (downloading) from OBIEE, or provide workarounds for the circumstances above?
Thanks so much in advance.
Adam
Edited by: AdamM on Feb 9, 2010 9:01 PM
Edited by: AdamM on Feb 9, 2010 9:02 PM
@chandrasekhar:
Thanks for your response. I'll try with the export button, but I'd also like to know how to create a button on the toolbar, such that clicking it shows a popup with two radio buttons asking whether to download the report in .xls or .csv format. I am looking for the subroutines for that.
Thanks.
Message was edited by:
cinthia nazneen -
What advantages do Spry datasets have over database datasets
I have been studying the Spry dataset and have found it terribly difficult to understand what use it is.
Can anyone explain why I should use it rather than using a database?
On the face of it, databases are much easier to construct and access, so why use an alternative system?
From what I understand, I have to create a basic database table on an HTML page before I can attempt to create a Spry dataset.
Or even use the dataset of a database to create a Spry dataset. I appreciate that there are other methods.
There are lots of articles telling one how to use it, but none stating what it is for and why one should use it - at least, none that I can find.
Where are the advantages of this system, or are you all using it just for technology's sake?
Or is it just an Adobe thing?
Hello,
There are big difference between Datasets and databases. While I disagree with your point that Databases are easier to construct instead of Datasets. I understand your point. You should not see Datasets and databases as rivals of each others but more of a extension of each other. You use a database to store all your data and contents. When you execute a query on your database your will get a result set, or dataset. This usually the place when Spry could come in. You have data output from your database, now you just place it static on your page. Nothing wrong with that. But if you wish to create a interactive page it will usually require you to build allot of round trips to your server to present your data in different ways.
With Spry these round trips can be handled on the client side. Once your users have received your data for the Spry Data Set there are allot things they can do with it, and usually faster than doing round trips to the server. For example your can sort and filter data sets, display information in a master and detail layout or even create different page states using the same data with out having to reload the page. This will create a more seamless experience for the user.
For example, our company is currently developing a search result page based on Spry Datasets. On the server side we have our database clusters that output the data for a search query, but this is usually 200+ result rows. On a traditional static website it would take a while for the user to dig through all the results, either by navigating to the next page or by filtering the dataset using forms.
With Spry we now have our result set output as JSON (one of the formats Spry supports as a data source for the Spry Data Set) and we download it all to the client. That is all they need: all sorting, filtering, and pagination is done client side, so there are no more round trips to the server (less server stress, and it's faster).
I could go on about it for hours, but I hope this gives a general idea of the use cases. If not, take a look at the Spry demos, which use Spry Data to create rich, interactive pages, and imagine how you would have built those using traditional techniques and how long it would have taken.
http://labs.adobe.com/technologies/spry/demos/ -
Updating a column in the database
Hi,
My requirement is to update a column in the database just before calling
OAException.raiseBundledOAException(getExceptionList());
I am calling the above in a method in the AM
Just before calling this, I am getting a handle on the VO (which is based on an EO, which in turn is based on the table I want to update) and setting the value of that column to null. But the value is not getting updated at the database level.
When I query the database I can still see the earlier value and not the updated value.
In some cases I can see the value on the screen is updated but the value in the database remains same.
Please let me know how I can update a column in the database.
Thanks
Meenal
Hi,
I guess, since an exception is raised, the transaction is not committed automatically by the framework.
Try committing the transaction explicitly after the update and before the exception. But I don't think it's a good design... anybody any idea?! -
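The commit-before-raise pattern suggested in that reply can be sketched generically. This is not OA Framework code; it is a minimal Python/sqlite3 illustration (table and function names are made up) of why an explicit commit before raising lets the column update survive the exception.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE po_headers (id INTEGER PRIMARY KEY, attr TEXT)")
conn.execute("INSERT INTO po_headers VALUES (1, 'old')")
conn.commit()

class ValidationError(Exception):
    pass

def flag_and_raise(conn, row_id):
    # Update the column first...
    conn.execute("UPDATE po_headers SET attr = NULL WHERE id = ?", (row_id,))
    # ...then commit explicitly BEFORE raising, so the write is durable
    # even though the exception aborts the rest of the flow.
    conn.commit()
    raise ValidationError("bundled validation errors")

try:
    flag_and_raise(conn, 1)
except ValidationError:
    pass  # the error is shown to the user, but the update stuck

print(conn.execute("SELECT attr FROM po_headers WHERE id = 1").fetchone())  # → (None,)
```

As the reply notes, committing mid-validation is questionable design: a later failure can no longer roll everything back, so use it deliberately.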
Error updating the dataset!!
Hi,
Currently our end users are working with SAP Lumira; an automatic update to 1.15.1, Build 879 was made. What we can see is that when they try to refresh stories that were already generated with version 1.14, SAP Lumira sends an error message:
Error updating the dataset!!
No more details.
Steps to Reproduce
1. Open SAP Lumira
2. Open Story
3. Refresh Story with the same excel dataset.
4. Error
SAP Lumira 1.15.1 Build 879
Windows 7 Home Basic
Microsoft Excel 2010
Hi Tammy,
The SAP note doesn't apply.
Cause
The cause seemed to be something to do with a set of about 20,000 cells in a single column of the Excel sheet. --> We use only about 150 cells.
The exact item in the Excel file that is causing the issue was not found. --> The file is in the same path.
Resolution
Two solutions were found for this issue:
1. The column that was causing issues was a date column, and the # symbol was used for null values. Replacing the # symbol with a blank in the data got this to work. --> There is no # symbol in the file.
2. Opening the Excel file with another office tool like LibreOffice and exporting it as a new Excel file fixes whatever in the file is causing the error. --> Still the same error.
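For completeness, the first fix in that list (replacing `#` null markers in the date column with blanks) can be sketched in a few lines. This assumes the sheet has been exported to CSV; the column name `order_date` and the sample rows are hypothetical.

```python
import csv
import io

# Simulated CSV export of the problem sheet ("#" marks a null date).
raw = """id,order_date,amount
1,2014-01-15,100
2,#,250
3,2014-02-03,75
"""

reader = csv.DictReader(io.StringIO(raw))
cleaned = []
for row in reader:
    # Fix 1 from the thread: '#' was used for null dates; blank it out.
    if row["order_date"] == "#":
        row["order_date"] = ""
    cleaned.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "order_date", "amount"])
writer.writeheader()
writer.writerows(cleaned)
print(out.getvalue())
```

In a real workflow you would do the same find-and-replace directly in Excel (replace `#` with nothing in the affected column) before loading the file into Lumira.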
Maybe you are looking for
-
.ics file not opening in iCal and Entourage
Hi, this is the format of my .ics file. When I open it, it gives the error 'iCal can't read this calendar file. No events saves to the calendar'. This is the first time I am trying to open a calendar attachment sent via email. It's not opening in en
-
Why are pdf file attachments in emails converting to WordPad so I can't open them?
When I use webmail on my laptop, why are attachments to email converting to WordPad so I can't open them? == This happened == Every time Firefox opened == unknown
-
I walked into a dark room with my iPad and tripped on something. I then dropped the iPad on my bed. When I fell my elbow landed right in the middle of the screen of the iPad. I have a 30 dollar case. Now half the screen is black. What should I do? He
-
I have one table named Route (RouteID NUMBER Primary Key, RouteName VARCHAR2). I have one more table, Destination (DestId NUMBER ForeignKey reference (RouteID)). I am going to insert data into the Destination table without having an entry in the Route table. Th
-
Can we Start Complete Cache Refresh at production
While the production is running , we want to do a "Start Complete Cache Refresh" at SXI_CACHE. Please inform us whether you did it and if so, what is the impact? Thanks!