Regarding HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY
Hi,
I am trying to create a collection using this piece of code:
HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
p_COLLECTION_NAME => 'COMMENTS_COLLECTION',
p_QUERY => 'SELECT
ed.severity "Severity",
rownum "Number",
ed.id
from
external_defect ed
where ed.OBJ_ID=:P156_OBJ_ID
and ed.pro_id = :P156_PRO_ID' );
But while running this I am getting the following error:
"create_collection_from_query Error:ORA-20104: create_collection_from_query ParseErr:ORA-00942: table or view does not exist"
However, in SQL Workshop I am able to run the SELECT query against this table.
Please help me with this.
Thanks
"user554934",
This may not be completely obvious from the documentation, but I don't believe the bind variables in your query are processed by create_collection_from_query. Can you try this via the v() function instead?
As in:
HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
p_COLLECTION_NAME => 'COMMENTS_COLLECTION',
p_QUERY => 'SELECT
ed.severity "Severity",
rownum "Number",
ed.id
from
external_defect ed
where ed.OBJ_ID=v(''P156_OBJ_ID'')
and ed.pro_id = v(''P156_PRO_ID'')'
);
Joel
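Because the query string is parsed outside the page context, another pattern sometimes used instead of v() is to concatenate the session state value into the query text. A minimal sketch (with the usual caveat that concatenating values into SQL risks injection, so it should only be done with values you control; here TO_NUMBER acts as a sanity check on the numeric IDs):

```sql
-- Sketch: concatenating validated session state into the query text.
-- Assumes OBJ_ID and PRO_ID are numeric columns.
HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
  p_collection_name => 'COMMENTS_COLLECTION',
  p_query           => 'select ed.severity, rownum, ed.id
                          from external_defect ed
                         where ed.obj_id = ' || TO_NUMBER(v('P156_OBJ_ID')) || '
                           and ed.pro_id = ' || TO_NUMBER(v('P156_PRO_ID')) );
```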
Similar Messages
-
We have recently upgraded a production application from 1.6 to 2.2.1 and have found a problem creating a collection from a query in a Before Header process. The page no longer displays and we eventually get the standard HTTP request failed message after over 60 seconds. I have traced this session and the process appears to hang after parsing and then executing the cursor, i.e. it does not fetch the results.
The SQL is based on a view that uses CONNECT BY PRIOR and runs fine in SQL*Plus (under 1 second to return 63 rows). Other queries passed to the API appear to work fine, whether they are based on tables and/or views. I have also tried changing the reference to apex_collection but still get the same issue.
Have changed the shared_pool_size to 100M as recommended for the upgrade.
Has anyone experienced a similar issue or can recommend a better/alternative method to achieve the following:
if ( htmldb_collection.collection_exists( v_collection_name ) )
then
htmldb_collection.delete_collection( v_collection_name );
else
htmldb_collection.create_collection_from_query
(p_collection_name => v_collection_name ,
p_query => 'select distinct
t.customer_id --1
, t.fiscal_year --2
, t.financial_header_id --3
, t.financial_item_id --4
, t.sequence --5
, t.code --6
, t.description --7
, t.is_title --8
, t.is_item --9
, t.is_negative --10
, t.is_hidden --11
, t.connect_by_isleaf --12
, null this_year_minus_0 --13
, null this_year_minus_1 --14
, null this_year_minus_2 --15
from financial_statements t where ( t.is_hidden = 0 )');
end if;
Thanks, Ian
Daniel,
That API is meant to be used from within an application, from a page process, for example. So I would expect an error from where you are calling it. (Disregard browser language question.)
When you said:
Our production version is 2.0.0.00.49 because we have a 9.2 database.
...what do you mean? You can use any version of Application Express with 9.2.0.3 or higher database.
Scott -
Question about HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY
Actually 2 questions:
1) I have a query which normally runs for about 10-20 seconds and returns over a million records. When I use this query with HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY in an On Load - Before Header page process, the page times out. If I put some restriction in the query and bring the number of records down to about 100,000 it goes through. Is this normal? Should I not be using this technique with large queries?
2) I have successfully used HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY in a package which is called from an Apex page, but when I try to run a script containing HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY in TOAD, in the same schema where my Apex applications are parsed, I get the following error:
ORA-20104: create_collection_from_query Error:ORA-20001: Invalid parsing schema for current workspace ID
ORA-06512: at "FLOWS_030100.WWV_FLOW_COLLECTION",
Can somebody shed some light on this please!
George
Hi George,
I don't know exactly what the method actually does (a DBA may be able to view the package body code to determine this), but I would expect it to at least double the time compared to running the query directly in SQL: once to run the query, and a second pass to populate the collection. Once created, though, any computations done to get the results for the reports would be a lot slower than querying the tables directly, as aggregation and filters cannot use the indexes that should exist on the tables.
One possibility, which I haven't yet used myself, would be Materialized Views - see: [http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_6002.htm#sthref6657] - though this may depend on the frequency of updates on the underlying tables, as it would, of course, add overhead when inserting/updating/deleting data. You can have indexes on these views, which would help with the reports.
Another possibility would be to have a scheduled job that analyses the tables on a regular basis and generates snapshots in secondary tables that you could then base your reports on. We actually use this method in our office for daily snapshots of financial data - these are scheduled to run overnight so that they do not slow down normal operations.
If you wanted to see how long it would take to generate the collection, search the forum for "Processing" and you should find examples of pages that use the Processing graphic (like you see if you install an application into Apex, for example). You could apply that to your page and then time it until you get the results.
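The overnight-snapshot approach above could be sketched like this (the table name FINANCIAL_SNAPSHOT, the job name, and the 2 a.m. schedule are illustrative assumptions, not from the original post):

```sql
-- Sketch: refresh a snapshot table nightly with DBMS_SCHEDULER,
-- so reports can query an indexed secondary table instead of a collection.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'REFRESH_FINANCIAL_SNAPSHOT',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          DELETE FROM financial_snapshot;
                          INSERT INTO financial_snapshot
                            SELECT * FROM financial_statements;
                          COMMIT;
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```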
Andy -
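Regarding George's second question, the ORA-20001 "Invalid parsing schema for current workspace ID" error typically occurs because the collections API needs an APEX workspace context that TOAD or SQL*Plus sessions do not have. A sketch of the usual workaround ('MY_WORKSPACE' is a placeholder for your actual workspace name):

```sql
-- Sketch: establish a workspace context before calling the collections API
-- from outside APEX. 'MY_WORKSPACE' is a hypothetical workspace name.
BEGIN
  apex_util.set_security_group_id(
    apex_util.find_security_group_id(p_workspace => 'MY_WORKSPACE'));
  htmldb_collection.create_collection_from_query(
    p_collection_name => 'TEST_COLL',
    p_query           => 'select * from dual');
END;
/
```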
HTMLDB_COLLECTION.MERGE_MEMBERS
L.S.,
I created a SQL Query (updateable report) region based on a collection: 'X'.
The collection is created in a page process which is executed On Load - Before Header:
if HTMLDB_COLLECTION.COLLECTION_EXISTS( p_collection_name => 'X' )
then
HTMLDB_COLLECTION.DELETE_COLLECTION( p_collection_name => 'X' );
end if;
HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
p_collection_name => 'X',
p_query => 'select * from x' );
To update the collection when someone makes any changes to the collection-member-attributes, I created a page process:
/* collect the original values */
for r in ( select * from htmldb_collections where collection_name = 'X' )
loop
/* compare these values with the 'report-values' */
if nvl(r.c001,'#') <> nvl(htmldb_application.g_f02(i),'#')
then
/* the values is changed, so update the member-attribute */
htmldb_collection.update_member_attribute( 'X',
r.seq_id,
1,
p_attr_value => htmldb_application.g_f02(i));
end if;
if nvl(r.c002,'#') <> nvl(htmldb_application.g_f03(i),'#')
then
htmldb_collection.update_member_attribute( 'X',
r.seq_id,
2,
p_attr_value => htmldb_application.g_f03(i));
end if;
With 40 attributes, this approach means a lot of repetitive programming.
Can someone 'tell me more' about HTMLDB_COLLECTION.MERGE_MEMBERS?
How exactly (which arguments etc.) does one call this procedure?
Has someone used it before? Is there an example?
Thanks,
Jos
Jos,
Here is an example of calling the merge_members procedure. I didn't test it, though; see if it makes sense to you:
declare
l_f01 wwv_flow_global.vc_arr2;
l_c001 wwv_flow_global.vc_arr2;
l_c002 wwv_flow_global.vc_arr2;
begin
-- existing collection contains c001 and c002 values for elements 1,2,3,4,5
l_f01(1) := '1'; -- keep elements at index position 1
l_f01(2) := '3'; -- keep elements at index position 3
l_f01(3) := '5'; -- keep elements at index position 5
l_f01(4) := '6'; -- insert elements at index position 6
-- collection elements at index positions 2 and 4 will be deleted
-- set elements to be updated
l_c001(1) := 'upd-attr1';
l_c002(1) := 'upd-attr2';
l_c001(3) := 'upd-attr1';
l_c002(3) := 'upd-attr2';
l_c001(5) := 'upd-attr1';
l_c002(5) := 'upd-attr2';
-- set new elements to insert
l_c001(6) := 'ins-attr1';
l_c002(6) := 'ins-attr2';
htmldb_collection.merge_members (
p_collection_name =>'TEST_COLL',
p_seq => l_f01,
p_c001 => l_c001,
p_c002 => l_c002);
end;
Scott -
I have a form which uses GET to post fields to another page
which lists real estate results.
The form works fine; it searches
Area
Property Type:
Min Price: etc. by passing the form fields to a page where I have an SQL
query that processes the results.
The client wants to add an extra field for searching a
reference number in the same form, which, if entered, will mean
the search only returns that reference number and nothing else.
How do I do this in DW?
This is my existing SQL code for the search on the existing
fields:
SELECT *
FROM Proeprt
WHERE Price BETWEEN Varmax AND VarMin AND Areaname LIKE
Varareasort AND types.PropertyType LIKE VarTypes AND Bedrooms LIKE
VarBeds
Regards
Andrew Newton
My first name is Madhu. The content of the last post changed but the problem didn't change.
I don't know if this would make any difference, but one thing I wanted to mention is that I build my query in a package in the backend, create 15 collections using htmldb_collection.create_collection_from_query, and actually return the query
select c001,c002, c003,c004,c005,c006,c007,c008,c009,c010,c011,c012,c013 from htmldb_collections where collection_name = '''||v_function_name||'''';
Also, I split my 15 regions into 3 different pages and the flow runs perfectly irrespective of how many search criteria are entered. But one of the main requirements of this application is that all the regions have to be on one page, unless there is a limitation in APEX. -
SQL Query report region that only queries on first load
Hello all,
Is there any way to prevent a SQL Query report region from querying the data after every refresh?
I would like to make a report that queries on the first load; then I would like to change the individual values and reload to show the change, but every time I reload the page the columns are re-queried and the original values are displayed once again...
any ideas?
-Mux
Chet,
I created a header process to create the HTMLDB_COLLECTION. It is something like:
HTMLDB_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
p_collection_name => 'Course_Data',
p_query => 'SELECT DISTINCT COURSE_ID, HTMLDB_ITEM.CHECKBOX(14,COURSE_ID) as "checker", TITLE, SUBJECT, COURSE_NUMB, SECTION, ENROLLED, null as "temp_term", null as "temp_title", null as "temp_crse_id", null as "temp_subj", null as "temp_crse_numb", null as "temp_sect" FROM DB_TBL_A, DB_TBL_B, DB_TBL_C, DB_TBL_D, DB_TBL_E, DB_TBL_F WHERE ...');
The names were changed, for obvious reasons.
I then created an SQL Report Region to see if it would work. The SQL is:
SELECT c001, c002, c003
FROM htmldb_collections
WHERE collection_name = 'COURSE_DATA'
When I run the page it says:
ORA-20104: create_collection_from_query Error:ORA-20104: create_collection_from_query ExecErr:ORA-01008: not all variables bound
Any idea why this is happening?
I'm new to HTMLDB_COLLECTIONS, so I may be doing something wrong.
-Mux -
Csv-export with dynamic filename ?
I'm using the report template "export: CSV" to export data to a csv-file. It works fine with a static file name. Now, I would like to have a dynamic file name for the csv-file, e.g. "ExportData_20070209.csv".
Any ideas?
Ran,
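For the date-stamped name itself, a minimal sketch (assuming the file name is built in PL/SQL before the export runs; the p_filename parameter name matches the export procedure given later in this thread):

```sql
-- Sketch: build a dynamic, date-stamped file name for the CSV export.
DECLARE
  v_filename VARCHAR2(100);
BEGIN
  v_filename := 'ExportData_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.csv';
  -- pass it to the export call, e.g. p_filename => v_filename
END;
/
```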
This is a procedure which will export the result of any query into a flat file in your directory:
PROCEDURE export_rep_to_bfile (
p_query IN VARCHAR2,
p_directory IN VARCHAR2,
p_filename IN VARCHAR2 DEFAULT 'export_csv.csv',
p_delimiter IN VARCHAR2 DEFAULT ';',
p_coll_name IN VARCHAR2 DEFAULT 'MY_EXP_COLL')
IS
v_file UTL_FILE.file_type;
v_column_list DBMS_SQL.desc_tab;
v_column_cursor PLS_INTEGER;
v_number_of_col PLS_INTEGER;
v_col_names VARCHAR2 (4000);
v_line VARCHAR2 (4000);
v_delimiter VARCHAR2 (100);
v_error VARCHAR2 (4000);
v_code VARCHAR2 (4000);
BEGIN
-- Get the Source Table Columns
v_column_cursor := DBMS_SQL.open_cursor;
DBMS_SQL.parse (v_column_cursor, p_query, DBMS_SQL.native);
DBMS_SQL.describe_columns (v_column_cursor,
v_number_of_col,
v_column_list);
DBMS_SQL.close_cursor (v_column_cursor);
FOR i IN v_column_list.FIRST .. v_column_list.LAST
LOOP
v_col_names :=
v_col_names || p_delimiter
|| (v_column_list (i).col_name);
END LOOP;
v_col_names := LTRIM (v_col_names, p_delimiter);
v_file := UTL_FILE.fopen (p_directory, p_filename, 'W');
UTL_FILE.put_line (v_file, v_col_names);
--for ApEx when used outside of ApEx
IF htmldb_collection.collection_exists (p_collection_name => p_coll_name)
THEN
htmldb_collection.delete_collection
(p_collection_name => p_coll_name);
END IF;
htmldb_collection.create_collection_from_query
(p_collection_name => p_coll_name,
p_query => p_query);
COMMIT;
v_number_of_col := 50 - v_number_of_col;
FOR i IN 1 .. v_number_of_col
LOOP
v_delimiter := v_delimiter || p_delimiter;
END LOOP;
FOR c IN (SELECT c001, c002, c003, c004, c005, c006, c007, c008, c009,
c010, c011, c012, c013, c014, c015, c016, c017, c018,
c019, c020, c021, c022, c023, c024, c025, c026, c027,
c028, c029, c030, c031, c032, c033, c034, c035, c036,
c037, c038, c039, c040, c041, c042, c043, c044, c045,
c046, c047, c048, c049, c050
FROM htmldb_collections
WHERE collection_name = p_coll_name)
LOOP
v_line :=
RTRIM ( c.c001
|| p_delimiter
|| c.c002
|| p_delimiter
|| c.c003
|| p_delimiter
|| c.c004
|| p_delimiter
|| c.c005
|| p_delimiter
|| c.c006
|| p_delimiter
|| c.c007
|| p_delimiter
|| c.c008
|| p_delimiter
|| c.c009
|| p_delimiter
|| c.c010
|| p_delimiter
|| c.c011
|| p_delimiter
|| c.c012
|| p_delimiter
|| c.c013
|| p_delimiter
|| c.c014
|| p_delimiter
|| c.c015
|| p_delimiter
|| c.c016
|| p_delimiter
|| c.c017
|| p_delimiter
|| c.c018
|| p_delimiter
|| c.c019
|| p_delimiter
|| c.c020
|| p_delimiter
|| c.c021
|| p_delimiter
|| c.c022
|| p_delimiter
|| c.c023
|| p_delimiter
|| c.c024
|| p_delimiter
|| c.c025
|| p_delimiter
|| c.c026
|| p_delimiter
|| c.c027
|| p_delimiter
|| c.c028
|| p_delimiter
|| c.c029
|| p_delimiter
|| c.c030
|| p_delimiter
|| c.c031
|| p_delimiter
|| c.c032
|| p_delimiter
|| c.c033
|| p_delimiter
|| c.c034
|| p_delimiter
|| c.c035
|| p_delimiter
|| c.c036
|| p_delimiter
|| c.c037
|| p_delimiter
|| c.c038
|| p_delimiter
|| c.c039
|| p_delimiter
|| c.c040
|| p_delimiter
|| c.c041
|| p_delimiter
|| c.c042
|| p_delimiter
|| c.c043
|| p_delimiter
|| c.c044
|| p_delimiter
|| c.c045
|| p_delimiter
|| c.c046
|| p_delimiter
|| c.c047
|| p_delimiter
|| c.c048
|| p_delimiter
|| c.c049
|| p_delimiter
|| c.c050,
v_delimiter);
UTL_FILE.put_line (v_file, v_line);
END LOOP;
UTL_FILE.fclose (v_file);
IF htmldb_collection.collection_exists (p_collection_name => p_coll_name)
THEN
htmldb_collection.delete_collection
(p_collection_name => p_coll_name);
END IF;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
IF DBMS_SQL.is_open (v_column_cursor)
THEN
DBMS_SQL.close_cursor (v_column_cursor);
END IF;
v_error := SQLERRM;
v_code := SQLCODE;
raise_application_error (-20001,
'Your query is invalid!'
|| CHR (10)
|| 'SQL_ERROR: '
|| v_error
|| CHR (10)
|| 'SQL_CODE: '
|| v_code
|| CHR (10)
|| 'Please correct and try again.'
|| CHR (10));
END export_rep_to_bfile;
You may adjust this to get a report query by doing the following:
DECLARE
x VARCHAR2 (4000);
BEGIN
SELECT region_source
INTO x
FROM apex_application_page_regions
WHERE region_id = TO_NUMBER (LTRIM (p_region, 'R'))
AND page_id = p_page_id
AND application_id = p_app_id;
END;
This will return your region's query.
...and there you go.
Denes Kubicek -
Using collections and experiencing slow response
I am experiencing slow response when using htmldb_collection. I was hoping someone might point me in another direction or point to where the delay may be occurring.
First a synopsis of what I am using these collections for. The main collections are used in order to enable the users to work with multiple rows of data (each agreement may have multiple inbound and outbound tiered rates). These collections, OBTCOLLECTION and IBTCOLLECTION, seem to be fine. The problem arises from the next set of collections.
OBTCOLLECTION and IBTCOLLECTION each contain a field for city, product, dial code group and period. Each of these fields contains a semi-colon delimited string. When the user chooses to view either the outbound tiers (OBTCOLLECTION) or the inbound tiers (IBTCOLLECTION), I generate four collections based on these four fields, parsing the delimited strings, for each tier (record in the OBT or IBT collection). Those collections are used as the bases for multiple select shuttles when the user edits an individual tier.
Here is the collection code for what I am doing.
When the user chooses an agreement to work with, by clicking on an edit link, they are sent to page 17 (as you see referenced in the code). That page has the on-demand process below triggered on load, after footer.
-- This process Loads the collections used
-- for the Inbound and Outbound tier details
-- OBTCOLLECTION
-- IBTCOLLECTION
-- It is an on-demand process called on load (after footer) of page 17 --
-- OUTBOUND TIER COLLECTION --
if htmldb_collection.collection_exists( 'OBTCOLLECTION') = TRUE then
htmldb_collection.delete_collection(p_collection_name => 'OBTCOLLECTION' );
end if;
htmldb_collection.create_collection_from_query(
p_collection_name => 'OBTCOLLECTION',
p_query => 'select ID, AGREEMENT_ID, FIXED_MOBILE_ALL,
OF_TYPE,TIER, START_MIN, END_MIN,
REVERT_TO, RATE,CURRENCY,
PENALTY_RATE, PENALTY_CURR,
PRODUCT,CITY, DIAL_CODE_GROUP, PERIOD,
to_char(START_DATE,''MM/DD/YYYY''),
to_char(END_DATE,''MM/DD/YYYY''),
MONTHLY,EXCLUDED,
''O'' original_flag
from outbound_tiers
where agreement_id = '''||:P17_ID ||'''
order by FIXED_MOBILE_ALL, ID',
p_generate_md5 => 'YES');
-- INBOUND TIER COLLECTION --
if htmldb_collection.collection_exists( 'IBTCOLLECTION') = TRUE then
htmldb_collection.delete_collection(p_collection_name => 'IBTCOLLECTION' );
end if;
htmldb_collection.create_collection_from_query(
p_collection_name => 'IBTCOLLECTION',
p_query => 'select ID, AGREEMENT_ID, FIXED_MOBILE_ALL,
OF_TYPE,TIER, START_MIN, END_MIN,
REVERT_TO, RATE,CURRENCY,
PENALTY_RATE, PENALTY_CURR,
PRODUCT,CITY, DIAL_CODE_GROUP, PERIOD,
to_char(START_DATE,''MM/DD/YYYY''),
to_char(END_DATE,''MM/DD/YYYY''),
MONTHLY,EXCLUDED,
''O'' original_flag
from inbound_tiers
where agreement_id = '''||:P17_ID ||'''
order by FIXED_MOBILE_ALL, ID',
p_generate_md5 => 'YES');
commit;
The tables each of these collections is created from contain about 2,000 rows each.
This part is working well enough.
Next, when the user chooses to view the tier information (either inbound or Outbound) they navigate to either of two pages that have the on-demand process below triggered on load, after header.
-- This process Loads all of the collections used
-- for the multiple select shuttles --
-- DCGCOLLECTION
-- CITYCOLLECTION
-- PRODCOLLECTION
-- PRDCOLLECTION
-- It is an on-demand process called on load (after footer) --
DECLARE
dcg_string long;
dcg varchar2(100);
city_string long;
the_city varchar2(100);
prod_string long;
prod varchar2(100);
prd_string long;
prd varchar2(100);
end_char varchar2(1);
n number;
CURSOR shuttle_cur IS
SELECT seq_id obt_seq_id,
c013 product,
c014 city,
c015 dial_code_group,
c016 period
FROM htmldb_collections
WHERE collection_name = 'OBTCOLLECTION';
shuttle_rec shuttle_cur%ROWTYPE;
BEGIN
-- CREATE OR TRUNCATE DIAL CODE GROUP COLLECTION FOR MULTIPLE SELECT SHUTTLES --
htmldb_collection.create_or_truncate_collection(
p_collection_name => 'DCGCOLLECTION');
-- CREATE OR TRUNCATE CITY COLLECTION FOR MULTIPLE SELECT SHUTTLES --
htmldb_collection.create_or_truncate_collection(
p_collection_name => 'CITYCOLLECTION');
-- CREATE OR TRUNCATE PRODUCT COLLECTION FOR MULTIPLE SELECT SHUTTLES --
htmldb_collection.create_or_truncate_collection(
p_collection_name => 'PRODCOLLECTION');
-- CREATE OR TRUNCATE PERIOD COLLECTION FOR MULTIPLE SELECT SHUTTLES --
htmldb_collection.create_or_truncate_collection(
p_collection_name => 'PRDCOLLECTION');
-- LOAD COLLECTIONS BY LOOPING THROUGH CURSOR.
OPEN shuttle_cur;
LOOP
FETCH shuttle_cur INTO shuttle_rec;
EXIT WHEN shuttle_cur%NOTFOUND;
-- DIAL CODE GROUP --
dcg_string := shuttle_rec.dial_code_group ;
end_char := substr(dcg_string,-1,1);
if end_char != ';' then
dcg_string := dcg_string || ';' ;
end if;
LOOP
EXIT WHEN dcg_string is null;
n := instr(dcg_string,';');
dcg := ltrim( rtrim( substr( dcg_string, 1, n-1 ) ) );
dcg_string := substr( dcg_string, n+1 );
if length(dcg) > 1 then
htmldb_collection.add_member(
p_collection_name => 'DCGCOLLECTION',
p_c001 => shuttle_rec.obt_seq_id,
p_c002 => dcg,
p_generate_md5 => 'NO');
end if;
END LOOP;
-- CITY --
city_string := shuttle_rec.city ;
end_char := substr(city_string,-1,1);
if end_char != ';' then
city_string := city_string || ';' ;
end if;
LOOP
EXIT WHEN city_string is null;
n := instr(city_string,';');
the_city := ltrim( rtrim( substr( city_string, 1, n-1 ) ) );
city_string := substr( city_string, n+1 );
if length(the_city) > 1 then
htmldb_collection.add_member(
p_collection_name => 'CITYCOLLECTION',
p_c001 => shuttle_rec.obt_seq_id,
p_c002 => the_city,
p_generate_md5 => 'NO');
end if;
END LOOP;
-- PRODUCT --
prod_string := shuttle_rec.product ;
end_char := substr(prod_string,-1,1);
if end_char != ';' then
prod_string := prod_string || ';' ;
end if;
LOOP
EXIT WHEN prod_string is null;
n := instr(prod_string,';');
prod := ltrim( rtrim( substr( prod_string, 1, n-1 ) ) );
prod_string := substr( prod_string, n+1 );
if length(prod) > 1 then
htmldb_collection.add_member(
p_collection_name => 'PRODCOLLECTION',
p_c001 => shuttle_rec.obt_seq_id,
p_c002 => prod,
p_generate_md5 => 'NO');
end if;
END LOOP;
-- PERIOD --
prd_string := shuttle_rec.period ;
end_char := substr(prd_string,-1,1);
if end_char != ';' then
prd_string := prd_string || ';' ;
end if;
LOOP
EXIT WHEN prd_string is null;
n := instr(prd_string,';');
prd := ltrim( rtrim( substr( prd_string, 1, n-1 ) ) );
prd_string := substr( prd_string, n+1 );
if length(prd) > 1 then
htmldb_collection.add_member(
p_collection_name => 'PRDCOLLECTION',
p_c001 => shuttle_rec.obt_seq_id,
p_c002 => prd,
p_generate_md5 => 'NO');
end if;
END LOOP;
END LOOP;
CLOSE shuttle_cur;
commit;
END;
Creating these collections from the initial collection is taking far too long. The page is rendered after about 22 seconds (when the tier collection has 2 rows) and 10 minutes in the worst case (when the tier collection has 56 rows).
Thank you in advance for any advice you may have.
Try to instrument/profile your code by putting timing statements after each operation. This way you can tell which parts are taking the most time and address them.
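A minimal sketch of that kind of instrumentation, using DBMS_UTILITY.GET_TIME (which returns elapsed time in hundredths of a second); the query shown is a simplified stand-in for the real one:

```sql
-- Sketch: time each collection operation to find the slow step.
DECLARE
  v_start PLS_INTEGER;
BEGIN
  v_start := DBMS_UTILITY.get_time;
  htmldb_collection.create_collection_from_query(
    p_collection_name => 'OBTCOLLECTION',
    p_query           => 'select id from outbound_tiers');
  DBMS_OUTPUT.put_line('create_collection_from_query took '
    || (DBMS_UTILITY.get_time - v_start) / 100 || ' s');
END;
/
```

Repeating the same pattern around each add_member loop would show whether the parsing or the collection inserts dominate the 10-minute worst case.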
-
Hi!
Does anybody know how to save a report in a BLOB column of my DB?
It has to be done within a process.
Thanks.
This procedure will do that for you:
PROCEDURE export_rep_to_blob (
p_query IN VARCHAR2,
p_filename IN VARCHAR2 DEFAULT 'export_csv.csv',
p_delimiter IN VARCHAR2 DEFAULT ';',
p_coll_name IN VARCHAR2 DEFAULT 'MY_EXP_COLL')
IS
v_column_list DBMS_SQL.desc_tab;
v_column_cursor PLS_INTEGER;
v_number_of_col PLS_INTEGER;
v_col_names VARCHAR2 (4000);
v_line VARCHAR2 (4000);
v_delimiter VARCHAR2 (100);
v_dest_lob BLOB;
v_id INTEGER;
v_mime_type VARCHAR2 (100) DEFAULT 'application/vnd.ms-excel';
v_unique_id NUMBER;
v_error VARCHAR2 (4000);
v_code VARCHAR2 (4000);
BEGIN
-- Get the Source Table Columns
v_column_cursor := DBMS_SQL.open_cursor;
DBMS_SQL.parse (v_column_cursor, p_query, DBMS_SQL.native);
DBMS_SQL.describe_columns (v_column_cursor,
v_number_of_col,
v_column_list);
DBMS_SQL.close_cursor (v_column_cursor);
FOR i IN v_column_list.FIRST .. v_column_list.LAST
LOOP
v_col_names :=
v_col_names || p_delimiter
|| (v_column_list (i).col_name);
END LOOP;
v_col_names := LTRIM (v_col_names, p_delimiter);
-- get unique sequence
SELECT file_util.file_id_sequence
INTO v_unique_id
FROM DUAL;
-- create an empty blob
INSERT INTO wwv_flow_files
(NAME, blob_content,
filename, mime_type, last_updated, content_type)
VALUES (v_unique_id || '/' || p_filename, EMPTY_BLOB (),
p_filename, v_mime_type, SYSDATE, 'BLOB')
RETURNING ID
INTO v_id;
-- select blob for update
SELECT blob_content
INTO v_dest_lob
FROM wwv_flow_files
WHERE ID = v_id
FOR UPDATE;
DBMS_LOB.writeappend (v_dest_lob,
LENGTH (v_col_names),
UTL_RAW.cast_to_raw (v_col_names));
DBMS_LOB.writeappend (v_dest_lob,
LENGTH (CHR (10)),
UTL_RAW.cast_to_raw (CHR (10)));
IF htmldb_collection.collection_exists (p_collection_name => p_coll_name)
THEN
htmldb_collection.delete_collection
(p_collection_name => p_coll_name);
END IF;
htmldb_collection.create_collection_from_query
(p_collection_name => p_coll_name,
p_query => p_query);
v_number_of_col := 50 - v_number_of_col;
FOR i IN 1 .. v_number_of_col
LOOP
v_delimiter := v_delimiter || p_delimiter;
END LOOP;
FOR c IN (SELECT c001, c002, c003, c004, c005, c006, c007, c008, c009,
c010, c011, c012, c013, c014, c015, c016, c017, c018,
c019, c020, c021, c022, c023, c024, c025, c026, c027,
c028, c029, c030, c031, c032, c033, c034, c035, c036,
c037, c038, c039, c040, c041, c042, c043, c044, c045,
c046, c047, c048, c049, c050
FROM htmldb_collections
WHERE collection_name = p_coll_name)
LOOP
v_line :=
RTRIM ( c.c001
|| p_delimiter
|| c.c002
|| p_delimiter
|| c.c003
|| p_delimiter
|| c.c004
|| p_delimiter
|| c.c005
|| p_delimiter
|| c.c006
|| p_delimiter
|| c.c007
|| p_delimiter
|| c.c008
|| p_delimiter
|| c.c009
|| p_delimiter
|| c.c010
|| p_delimiter
|| c.c011
|| p_delimiter
|| c.c012
|| p_delimiter
|| c.c013
|| p_delimiter
|| c.c014
|| p_delimiter
|| c.c015
|| p_delimiter
|| c.c016
|| p_delimiter
|| c.c017
|| p_delimiter
|| c.c018
|| p_delimiter
|| c.c019
|| p_delimiter
|| c.c020
|| p_delimiter
|| c.c021
|| p_delimiter
|| c.c022
|| p_delimiter
|| c.c023
|| p_delimiter
|| c.c024
|| p_delimiter
|| c.c025
|| p_delimiter
|| c.c026
|| p_delimiter
|| c.c027
|| p_delimiter
|| c.c028
|| p_delimiter
|| c.c029
|| p_delimiter
|| c.c030
|| p_delimiter
|| c.c031
|| p_delimiter
|| c.c032
|| p_delimiter
|| c.c033
|| p_delimiter
|| c.c034
|| p_delimiter
|| c.c035
|| p_delimiter
|| c.c036
|| p_delimiter
|| c.c037
|| p_delimiter
|| c.c038
|| p_delimiter
|| c.c039
|| p_delimiter
|| c.c040
|| p_delimiter
|| c.c041
|| p_delimiter
|| c.c042
|| p_delimiter
|| c.c043
|| p_delimiter
|| c.c044
|| p_delimiter
|| c.c045
|| p_delimiter
|| c.c046
|| p_delimiter
|| c.c047
|| p_delimiter
|| c.c048
|| p_delimiter
|| c.c049
|| p_delimiter
|| c.c050,
v_delimiter);
DBMS_LOB.writeappend (v_dest_lob,
LENGTH (v_line),
UTL_RAW.cast_to_raw (v_line));
DBMS_LOB.writeappend (v_dest_lob,
LENGTH (CHR (10)),
UTL_RAW.cast_to_raw (CHR (10)));
END LOOP;
-- update the new line
UPDATE wwv_flow_files
SET doc_size = DBMS_LOB.getlength (blob_content),
dad_charset = 'ascii'
WHERE ID = v_id;
-- import into custom table
INSERT INTO syn_file_table
SELECT NAME, 'export query flat file', ID, blob_content, mime_type,
filename, created_on
FROM wwv_flow_files
WHERE ID = v_id;
-- delete the file from wwv_flow_files
DELETE FROM wwv_flow_files
WHERE ID = v_id;
IF htmldb_collection.collection_exists (p_collection_name => p_coll_name)
THEN
htmldb_collection.delete_collection
(p_collection_name => p_coll_name);
END IF;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
v_error := SQLERRM;
v_code := SQLCODE;
IF DBMS_SQL.is_open (v_column_cursor)
THEN
DBMS_SQL.close_cursor (v_column_cursor);
END IF;
raise_application_error (-20001,
'Your query is invalid!'
|| CHR (10)
|| 'SQL_ERROR: '
|| v_error
|| CHR (10)
|| 'SQL_CODE: '
|| v_code
|| CHR (10)
|| 'Please correct and try again.'
|| CHR (10));
END export_rep_to_blob;
You will need to adjust the table names according to your setup.
Denes Kubicek
http://deneskubicek.blogspot.com/
http://www.opal-consulting.de/training
http://htmldb.oracle.com/pls/otn/f?p=31517:1
------------------------------------------------------------------- -
Regarding sy-index and sy-tabix
Hi,
What is the major difference between sy-index and sy-tabix ,
can you give me one good example with code..
Regards,
Reddy.
Hi,
SY-TABIX - Current line of an internal table. SY-TABIX is set by the statements below, but only for index tables. The field is either not set or is set to 0 for hashed tables.
APPEND sets SY-TABIX to the index of the last line of the table, that is, it contains the overall number of entries in the table.
COLLECT sets SY-TABIX to the index of the existing or inserted line in the table. If the table has the type HASHED TABLE, SY-TABIX is set to 0.
LOOP AT sets SY-TABIX to the index of the current line at the beginning of each loop pass. At the end of the loop, SY-TABIX is reset to the value it had before entering the loop. It is set to 0 if the table has the type HASHED TABLE.
READ TABLE sets SY-TABIX to the index of the table line read. If you use a binary search and the system does not find a line, SY-TABIX contains the total number of lines, or one more than the total number of lines. SY-TABIX is undefined if a linear search fails to return an entry.
SEARCH FOR sets SY-TABIX to the index of the table line in which the search string is found.
SY-INDEX - In a DO or WHILE loop, SY-INDEX contains the number of loop passes including the current pass.
sy-tabix is the table index - the index of the record in the internal table you are accessing;
sy-index is the loop counter.
If you use a condition in LOOP, sy-index will go from 1 to n, but sy-tabix will refer to the line in the internal table.
Hope this helps.
Thanks,
Ruthra -
Regarding Field Missing in Dso Transformation
Hi
Folks
I am facing the following issue:
In the DataSource-to-DSO transformation I can see 55 objects in the DSO table, but in the DSO-to-Cube transformation I can see only 54 fields; one field is missing. The object 0TXTSH (short description) is mapped to field 0TXZ01 in the DataSource-to-DSO transformation.
So how can I get the field into the DSO-to-Cube transformation?
Do any settings have to be changed?
Waiting for your valuable answers.
Regards
Anand
Hi,
Please identify the object and check whether it is an attribute or a characteristic; if it is an attribute, disable that option and then check again.
Regards,
Srinivas -
I am having an issue regarding a placed order via customer service department
I recently relocated to Anchorage Alaska as part of a permanent change of station per the United States Air Force. I was initially located on the East Coast in the lower 48 and at the time of activating my contract I had purchased two separate Iphone 4 devices. I also recently went in to a store in February to purchase a Nexus 7 as well.
Upon arrival in Anchorage I had multiple issues regarding the Iphone 4 devices including being unable to send and receive text messages & imessages, unable to make phone calls, dropped phone calls, unable to utilize GPS, as well as not being able to access general account information and use anything related to web browsing or data usage. It was determined that because the Iphone 4 operates on the 3g network and Verizon does not have a 3g network in Alaska, as a result I was utilizing an extended service network from another carrier. As a result of this I am only able to use my Iphone 4 devices while connected to my wi-fi network while within my home, which is totally unacceptable.
I was not made aware that I would be dealing with this when I moved to Alaska, and I inquired as to the use of the devices I currently owned prior to purchasing the tablet. I was assured by three separate store employees, one of whom was a manager, that all devices would function at 100% efficiency, including the Iphone 4s. In fact I was recently billed $350 for roaming charges last month, which prompted me to speak with a representative via the online chat regarding the significant increase; she said that she was unable to process any sort of credit to the account, regardless of what I had been told at the local Verizon store where I purchased the tablet.
As a result of all of these mishaps since arriving here in Alaska I determined I was in need of newer devices that utilize the 4G LTE network currently provided by Verizon in Alaska. I know for a fact that the 4G LTE works great up here because my Nexus 7 tablet runs flawlessly and does not incur roaming charges when utilizing the 4G LTE network.
Yesterday I attempted to contact Verizon through the live chat feature regarding upgrading two of the devices on my account. The live chat representative immediately asked me when my upgrade date was. Upon telling her my upgrade date, 9/29/2014, she told me I should contact the customer service department, as I might be eligible for an early upgrade. I then proceeded to contact the customer service department using my iPhone 4.
My attempt to speak to anyone in the customer service department resulted in a merry-go-round of being put on hold 6 separate times by two different employees, both of whom had me wait for more than an hour while they attempted to speak to a manager to gain approval for an early upgrade. The first rep seemed almost sure she would be able to have my devices upgraded early, especially considering the issues I was having with service.
The second rep seemed newer, was very dodgy about my questions, and was very unwilling to help at first. He even mentioned that I had been a Verizon customer for almost two years, had never missed a single payment, and had an outstanding account history, which should have lent some weight to my request. But I digress; during this time I was disconnected from the call twice, once by each representative.
Both reps assured me they would call me back. I never did get a call back from either one, and I was becoming very frustrated, having waited four hours trying to find some sort of solution to my current predicament.
After waiting an hour for the second representative to call back I grew impatient and contacted the customer service department, was put on hold again, and finally reached a third customer service representative who was able to provide a solution for me.
I explained everything I had been dealing with to Cory ID # V0PAC61, both regarding the phones, the issue of the level of service I was receiving, the dire need for working devices and the multiple times I had been disconnected. I explained to him as a result of these issues I was certainly considering switching to a different provider, a local provider even who could provide me the adequate service that I require for my mobile devices.
I explained to Cory that I had been with Verizon for almost two years, and I had been on a relatives account prior to owning my own Verizon account and had never received this kind of treatment when trying to work towards a simple solution. Cory proceeded to tell me he needed to put me on hold to see if there was anything that could be done regarding the upgrades of the device considering all of the trouble I had been dealing with.
After Cory reconnected with me on the phone call, he was able to reach a solution by allowing me to upgrade my devices. We conversed about the options available, and I eventually decided to upgrade both iPhone 4 devices to Moto X devices, as we determined those would be sufficient for my needs while in Alaska. I also added two Otter Box Defender cases to the order so that the devices would have sufficient protection. Cory inquired as to whether I would like to purchase insurance for the phones as well, and I opted for the $5.00 monthly insurance, which includes damage and water protection.
Cory explained to me the grand total for the devices which included an activation fee of $35.00 for each device, $49.99 for each Otter Box case, and an additional $50.00 for each device which would be refunded as a rebate upon receipt of the devices and activation, a rebate that I would be required to submit. Cory explained to me that the devices would most likely arrive Tuesday of 6/17 and no later than Wednesday 6/18.
Cory took my shipping information and told me everything was all set and the only thing left to do was to transfer me to the automated service so that I could accept the 2 year agreement for both devices. I thanked him very much, took his name and ID# so that I might leave positive feedback about his exemplary customer service and was then transferred to the automated service.
Once transferred to the automated service I was then prompted to enter both telephone numbers for the devices that would be upgraded, I was then required to accept the new 2 year agreement for both devices and after doing so I was required to end the call. I did so in an orderly fashion and expected a confirmation # to arrive in my email regarding the placed order.
I never received a confirmation email. I decided to sleep on it and assumed a confirmation email would be sent sometime the next day. However, nothing has since been received. I woke up early this morning around 6 AM Alaska time to speak to another live chat representative, Bryan, in the billing department, who assured me the order was currently processing and verified the order #. I asked him whether it was typical for a customer not to receive a confirmation email for a placed order, and he said it can sometimes take up to 2-3 business days. He then stated that he had taken note of the issues I was experiencing and told me he would transfer me to the sales department, as they would be able to provide more information regarding the shipment of both devices and a confirmation email; he stated he did not want me to have to wait any longer than necessary to receive said devices.
I was then transferred to Devon in the sales department via the live chat service, where I was required to repeat everything I had said to both Bryan and the other representatives I had spoken to. After a lengthy discussion, and after repeating everything I have just written, he told me the order was indeed processing and that he would send a confirmation email in the next 30 minutes.
That was 2 hours ago. It is now 8 AM Alaska time and I still have not received a confirmation email regarding my order. I was sent an email by Verizon an hour ago stating I had a device to "discover". The email contained no information regarding the shipment of my device, the order confirmation number, or anything about my account. It was a typical spam email asking an individual to check out the currently available phones and sign up for a new contract.
All I want is a confirmation email to assure that the devices are being sent. I need my phone for work and to communicate with my family in the lower 48. I desperately need to make sure that the device is in fact being sent to the proper address, this is why a confirmation email of the order is so important. I do not care about the shipping speed I just want what I ask to be taken care of for a change. I would hate to sit here unable to determine what the status of my devices are only for the order to be stuck in "processing" limbo and be unable to receive the devices when I was told they would be sent.
I feel I have been given the runaround far more than is typical of any company when an individual is trying to work toward a solution. I have been patient and cordial with everyone I have spoken with; I have not raised my voice or shown stress or anger toward the situation. I have only tried my best to work toward a solution with everyone I have spoken to, but I am becoming increasingly frustrated with this situation.
Any help regarding this matter would be greatly appreciated. This situation has left a sour taste in my mouth, and if the devices were indeed not actually processed in an order, or they were not shipped correctly, or in fact the order never existed at all, it will only deter me from keeping my Verizon account active and affect my decision to switch to another provider.
Hello APVzW, we absolutely want the best path to resolution. My apologies for the multiple attempts at replacing the device. We'd like to verify the order information and see if we can locate the tracking number. Please send a direct message with the order number so we can dive deeper. Here are the steps to send a direct message: http://vz.to/1b8XnPy We look forward to hearing from you soon.
WiltonA_VZW
VZW Support
Follow us on twitter @VZWSupport -
Regarding vendor line items report with opening and closing balances
Dear All,
I need a report for vendor line items with Opening and Closing balances.
Thanks in advance
Sateesh
Hi
Try S_ALR_87012082 - Vendor Balances in Local Currency
Regards
Sanil Bhandari -
Regarding training and event management queries
hi experts,
In my company we have ESS, in which the Training and Event Management module is working fine. I need to develop a report showing trainings booked against employees both through transaction PSV1 (i.e., in SAP R/3) and through ESS (transaction PV8I), i.e., an aggregation across SAP R/3 and ESS.
Please help me regarding this.
How will I identify whether a training has been booked against an employee through SAP R/3 or through ESS? On what parameter can we identify this?
Please help me.
Is there any function module?
Solved it on my own.
-
Regarding Exporting and Importing internal table
Hello Experts,
I have two programs:
1) Main program: it creates batch jobs through JOB_OPEN, SUBMIT and JOB_CLOSE, running the subprogram via SUBMIT.
I am using EXPORT it TO MEMORY ID 'MID' to export the internal table data to SAP memory in the subprogram.
The data is processed in the subprogram and exported to SAP memory. I need this data in the main program (and I am using IMPORT to get the data, but it is not working).
I use IMPORT it1 FROM MEMORY ID 'MID' to import the table data in the main program after the job completes (SUBMIT subprogram AND RETURN).
The IMPORT does not get the data into the internal table.
Can you please suggest something to solve this issue.
Thank you.
Regards,
Anand.
Hi,
This is the code I am using.
DO g_f_packets TIMES.
* Start Immediately
IF NOT p_imm IS INITIAL .
g_flg_start = 'X'.
ENDIF.
g_f_jobname = 'KZDO_INHERIT'.
g_f_jobno = g_f_jobno + '001'.
CONCATENATE g_f_jobname g_f_strtdate g_f_jobno INTO g_f_jobname
SEPARATED BY '_'.
CONDENSE g_f_jobname NO-GAPS.
p_psize1 = p_psize1 + p_psize.
p_psize2 = p_psize1 - p_psize + 1.
IF p_psize2 IS INITIAL.
p_psize2 = 1.
ENDIF.
g_f_spname = 'MID'.
g_f_spid = g_f_spid + '001'.
CONDENSE g_f_spid NO-GAPS.
CONCATENATE g_f_spname g_f_spid INTO g_f_spname.
CONDENSE g_f_spname NO-GAPS.
* ... (1) Job creating...
CALL FUNCTION 'JOB_OPEN'
EXPORTING
jobname = g_f_jobname
IMPORTING
jobcount = g_f_jobcount
EXCEPTIONS
cant_create_job = 1
invalid_job_data = 2
jobname_missing = 3
OTHERS = 4.
IF sy-subrc <> 0.
MESSAGE e469(9j) WITH g_f_jobname.
ENDIF.
* (2)Report start under job name
SUBMIT (g_c_prog_kzdo)
WITH p_lgreg EQ p_lgreg
WITH s_grvsy IN s_grvsy
WITH s_prvsy IN s_prvsy
WITH s_prdat IN s_prdat
WITH s_datab IN s_datab
WITH p1 EQ p1
WITH p3 EQ p3
WITH p4 EQ p4
WITH p_mailid EQ g_f_mailid
WITH p_psize EQ p_psize
WITH p_psize1 EQ p_psize1
WITH p_psize2 EQ p_psize2
WITH spid EQ g_f_spid
TO SAP-SPOOL WITHOUT SPOOL DYNPRO
VIA JOB g_f_jobname NUMBER g_f_jobcount AND RETURN.
*(3)Job closed when starts Immediately
IF NOT p_imm IS INITIAL.
IF sy-index LE g_f_nojob.
CALL FUNCTION 'JOB_CLOSE'
EXPORTING
jobcount = g_f_jobcount
jobname = g_f_jobname
strtimmed = g_flg_start
EXCEPTIONS
cant_start_immediate = 1
invalid_startdate = 2
jobname_missing = 3
job_close_failed = 4
job_nosteps = 5
job_notex = 6
lock_failed = 7
OTHERS = 8.
gs_jobsts-jobcount = g_f_jobcount.
gs_jobsts-jobname = g_f_jobname.
gs_jobsts-spname = g_f_spname.
APPEND gs_jobsts to gt_jobsts.
ELSEIF sy-index GT g_f_nojob.
CLEAR g_f_flg.
DO. " Waiting until any job completes
LOOP AT gt_jobsts into gs_jobsts.
CLEAR g_f_status.
CALL FUNCTION 'BP_JOB_STATUS_GET'
EXPORTING
JOBCOUNT = gs_jobsts-jobcount
JOBNAME = gs_jobsts-jobname
IMPORTING
STATUS = g_f_status.
* HAS_CHILD =
* EXCEPTIONS
* JOB_DOESNT_EXIST = 1
* UNKNOWN_ERROR = 2
* PARENT_CHILD_INCONSISTENCY = 3
* OTHERS = 4
g_f_mid = gs_jobsts-spname.
IF g_f_status = 'F'.
IMPORT gt_final FROM MEMORY ID g_f_mid .
FREE MEMORY ID gs_jobsts-spname.
APPEND LINES OF gt_final to gt_final1.
REFRESH gt_prlist.
CALL FUNCTION 'JOB_CLOSE'
EXPORTING
jobcount = g_f_jobcount
jobname = g_f_jobname
strtimmed = g_flg_start
EXCEPTIONS
cant_start_immediate = 1
invalid_startdate = 2
jobname_missing = 3
job_close_failed = 4
job_nosteps = 5
job_notex = 6
lock_failed = 7
OTHERS = 8.
IF sy-subrc = 0.
g_f_flg = 'X'.
gs_jobsts1-jobcount = g_f_jobcount.
gs_jobsts1-jobname = g_f_jobname.
gs_jobsts1-spname = g_f_spname.
APPEND gs_jobsts1 TO gt_jobsts.
DELETE TABLE gt_jobsts FROM gs_jobsts.
EXIT.
ENDIF.
ENDIF.
ENDLOOP.
IF g_f_flg = 'X'.
CLEAR g_f_flg.
EXIT.
ENDIF.
ENDDO.
ENDIF.
ENDIF.
IF sy-subrc <> 0.
MESSAGE e539(scpr) WITH g_f_jobname.
ENDIF.
COMMIT WORK .
ENDDO.
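A likely reason the IMPORT in the main program comes back empty: EXPORT ... TO MEMORY ID writes to ABAP memory, which is local to one internal session, and a job scheduled via JOB_OPEN / SUBMIT ... VIA JOB runs in its own session, so the main program cannot see what the background job exported. One common workaround is to pass the table through the cross-session INDX database cluster instead. A minimal sketch, where the area 'ZK' and the reuse of g_f_spname / g_f_mid as cluster IDs are illustrative assumptions, not taken from the code above:

```abap
* In the background job (subprogram): write the result table to the
* INDX database cluster, which is visible across sessions.
EXPORT gt_final = gt_final
  TO DATABASE indx(zk) ID g_f_spname.

* In the main program, once BP_JOB_STATUS_GET returns status 'F':
IMPORT gt_final = gt_final
  FROM DATABASE indx(zk) ID g_f_mid.
IF sy-subrc = 0.
  " Clean up the cluster record after a successful read.
  DELETE FROM DATABASE indx(zk) ID g_f_mid.
ENDIF.
```

With this approach, the IMPORT ... FROM MEMORY ID and FREE MEMORY ID calls in the status-polling loop would be replaced by the database variants above.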