Raw high_value in the all_tab_columns table
There are some columns defined as FLOAT(126) in my database. I need to find the highest values held in these columns. I am using the HIGH_VALUE column (defined as RAW) in the ALL_TAB_COLUMNS view. Is there a way to convert the raw HIGH_VALUE to a number/decimal?
Oracle version: 8i
Please read about UTL_RAW in the manual: http://download-west.oracle.com/docs/cd/A87860_01/doc/appdev.817/a76936/utl_raw2.htm
Regards,
Rob.
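For illustration: HIGH_VALUE for a numeric column holds the value in Oracle's internal NUMBER format (an excess-64 exponent byte followed by base-100 digits). Below is a minimal offline decoder sketch; the encoding rules are assumptions based on the commonly documented internal format, so verify against DUMP() output on your own data (later releases also offer DBMS_STATS.CONVERT_RAW_VALUE for this).

```python
def decode_oracle_number(raw_hex: str) -> float:
    """Decode Oracle's internal NUMBER format, as returned in
    ALL_TAB_COLUMNS.HIGH_VALUE, into a Python float."""
    b = bytes.fromhex(raw_hex)
    if b == b"\x80":                              # special encoding for zero
        return 0.0
    if b[0] & 0x80:                               # high bit set: positive
        sign, exponent = 1, (b[0] & 0x7F) - 65
        digits = [d - 1 for d in b[1:]]           # base-100 digits stored +1
    else:                                         # negative: bytes complemented
        sign, exponent = -1, 62 - b[0]
        mantissa = b[1:-1] if b[-1] == 102 else b[1:]  # drop 0x66 terminator
        digits = [101 - d for d in mantissa]
    value = 0.0
    for i, d in enumerate(digits):
        value += d * 100.0 ** (exponent - i)
    return sign * value
```

For example, DUMP(123) shows bytes 194,2,24 (hex C20218), which this sketch decodes back to 123.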
Similar Messages
-
How to delete the database table
Hi all,
I want to delete the data from a customized (Z) table, not an internal table. Can you help me do that? I wrote the code below, but with it I can delete only one row; I want to delete all rows in the table.
ZLAB_SUBMIT-SHADE = '1'.
ZLAB_SUBMIT-SUBMIT = '1'.
ZLAB_SUBMIT-RM = 'RINGS'.
ZLAB_SUBMIT-REMARKS = 'Approved'.
ZLAB_SUBMIT-AP = ''.
ZLAB_SUBMIT-USNAM = 'INDIKAF'.
ZLAB_SUBMIT-CPUDT = '05.10.2006'.
ZLAB_SUBMIT-CDATE = '05.10.2006'.
ZLAB_SUBMIT-CPUTM = '21:29:19'.
DELETE ZLAB_SUBMIT.
regard
nawa
this is my code....
Hi,
data itab type standard table of zlab_submit.
select * from zlab_submit into table itab.
delete zlab_submit from table itab.
Kindly reward points by clicking the star on the left of the reply, if it helps. -
Error when i fetch the external table in oracle 9i ?
External table is created.
But when I select from the external table, it throws the following error.
I have given READ and WRITE permission on the Oracle directory, and I have a flat file with comma-delimited data.
SQL> create table mohan_ext (
2 EMPNO NUMBER(5) ,
3 JOB VARCHAR2(15),
4 SALARY NUMBER(8,2),
5 MGR NUMBER(5) ,
6 HIREDATE DATE,
7 DEPTNO NUMBER(5)
8 )
9 organization external
10 (type oracle_loader
11 default directory ext_dir
12 access parameters (records delimited by newline
13 fields terminated by ','
14 missing field values are null
15 (
16 EMPNO NUMBER(5:5) ,
17 JOB VARCHAR2(15:15),
18 SALARY NUMBER(8,2:8,2),
19 MGR NUMBER(5:5) ,
20 HIREDATE DATE,
21 DEPTNO NUMBER(5:5)
22 )
23 )
24 LOCATION('flat.txt'));
Table created.
SQL> select * from mohan_ext;
select * from mohan_ext
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "comma, char, date, defaultif,
decimal, double, float, integer, (, nullif, oracle_date, oracle_number, position, raw, recnum, ),
unsigned, varrawc, varchar, varraw, varcharc, zoned"
KUP-01008: the bad identifier was: NUMBER
KUP-01007: at line 5 column 11
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1
SQL>
You may need to scrub some of the data prior to using it as an external table. For instance, ensure that you do not have any extra commas lingering somewhere within the data, as this can cause mapping issues. I've used this process hundreds of times, and more often than not there is an extra comma somewhere causing the issue.
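Following the reply's suggestion, a stray-delimiter check can be sketched offline before loading; the expected field count and any filename are assumptions for illustration:

```python
def find_bad_lines(lines, expected_fields, sep=","):
    """Return (line_number, field_count) for each line whose
    delimited field count differs from the expected one."""
    bad = []
    for lineno, line in enumerate(lines, start=1):
        n = len(line.rstrip("\n").split(sep))
        if n != expected_fields:
            bad.append((lineno, n))
    return bad

# Hypothetical usage against the flat file:
# with open("flat.txt") as f:
#     print(find_bad_lines(f, expected_fields=6))
```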
-
XMLIndex: finding indexed XPaths and the number of rows in the path table
Hi,
I am storing non-schema-based binary XMLs in an XMLType column in 11g (11.1.0.6.0) and would like to index the XMLs either partially or fully using XMLIndex. I'm expecting to have a large number (tens of millions) of XML documents and have some concerns about the size of the XMLIndex path table.
In short, I am worried that the path table might grow unmanageably large. To avoid this and to plan for table partitioning, I would like to create a report of all indexed XPaths in an XMLIndex and find out how many times each path occurs in the path table. I would do this for a representative XML sample.
I have been creating XMLIndexes with different exclude/include paths, gathering stats with DBMS_STATS (estimate_percent = 100) and selecting the number of rows in the path table through USER_TABLES.
If anyone knows a more straightforward way of doing this all advice is very much appreciated.
Best Regards,
Rasko Leinonen
Thanks Marco,
I managed to get out all indexed paths using the following SQL. It took a while to understand how to join the XDB.X$PT39CW6BJR8W4VVE0G0LLGA0OCR5 and XDB.X$QN39CW6BJR8W4VVE0G0LLGA0OCR5 tables together, but I got there in the end. This helps clarify which XPaths are currently being indexed by the XMLIndex.
begin
for v_row in (select PATH from XDB.X$PT39CW6BJR8W4VVE0G0LLGA0OCR5)
loop
declare
v_i BINARY_INTEGER := 1;
v_id raw(8);
v_len BINARY_INTEGER := 2;
v_skip BINARY_INTEGER := 1;
begin
while v_i < utl_raw.length(v_row.path) and
v_i + v_len <= utl_raw.length(v_row.path)
loop
v_i := v_i + v_skip;
v_id := utl_raw.substr(v_row.path, v_i, v_len);
--dbms_output.put_line(v_id);
for v_row2 in (select LOCALNAME, flags from XDB.X$QN39CW6BJR8W4VVE0G0LLGA0OCR5
where ID = v_id )
loop
if rawtohex(v_row2.flags) = '01'
then
dbms_output.put('@');
end if;
dbms_output.put(v_row2.localname);
if v_i + v_len < utl_raw.length(v_row.path)
then
dbms_output.put('/');
end if;
end loop;
v_i := v_i + v_len;
end loop;
dbms_output.put_line('');
end;
end loop;
end;
Example output:
RUN
RUN/@accession
RUN/@alias
RUN/@instrument_model
RUN/@run_date
RUN/@run_center
RUN/@total_data_blocks
RUN/EXPERIMENT_REF
RUN/EXPERIMENT_REF/@accession
RUN/EXPERIMENT_REF/@refname
RUN/DATA_BLOCK
RUN/DATA_BLOCK/@name
RUN/DATA_BLOCK/@total_spots
RUN/DATA_BLOCK/@total_reads
RUN/DATA_BLOCK/@number_channels
RUN/DATA_BLOCK/@format_code
RUN/DATA_BLOCK/@sector
RUN/DATA_BLOCK/FILES
RUN/DATA_BLOCK/FILES/FILE
RUN/DATA_BLOCK/FILES/FILE/@filename
RUN/DATA_BLOCK/FILES/FILE/@filetype
RUN/RUN_ATTRIBUTES
RUN/RUN_ATTRIBUTES/RUN_ATTRIBUTE
RUN/RUN_ATTRIBUTES/RUN_ATTRIBUTE/TAG
RUN/RUN_ATTRIBUTES/RUN_ATTRIBUTE/VALUE -
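The PL/SQL loop above (skip one byte, read a two-byte ID, resolve it against the qualified-name table, prefix attributes with '@') can be sketched offline like this; the byte layout and flag meaning are assumptions carried over from the posted code, not a documented format:

```python
def decode_path(path_hex, names):
    """Walk an XMLIndex path raw: after each 1-byte skip, read a
    2-byte ID and resolve it via the qualified-name map.
    `names` maps an uppercase hex ID to (localname, is_attribute)."""
    raw = bytes.fromhex(path_hex)
    i, parts = 0, []
    while i + 3 <= len(raw):
        i += 1                              # skip marker byte (v_skip = 1)
        ident = raw[i:i + 2].hex().upper()  # two-byte ID (v_len = 2)
        local, is_attr = names[ident]
        parts.append(("@" if is_attr else "") + local)
        i += 2
    return "/".join(parts)
```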
How to read the TEXT TABLE (or) .CSV in HSQLDB Standalone using Java
Hi, I'd like to use text tables in our application, with the HSQL Database Engine as a standalone database. I created the text tables, and they are stored on disk as ".csv" files. But I am unable to read a text table (.csv) when I log in again and give the CSV file path as the URL along with "jdbc:hsqldb:file". Can anybody give me tips on how to use text tables?
Regards,
Vinay
You need to make a URLConnection to the page you want, and get the input stream from that page.
The contents of the input stream (use a reader to get them) will give you the raw HTML code. Use a parser to get the actual content. I don't know of a parser offhand, but you can search for one. -
How to Access the XI_AF_MSG table
Hi Experts,
I request you to please let me know how I can access the different Java tables, such as the XI_AF_MSG table or the audit tables.
Actually, my requirement is to trace the audit log for a particular message ID in the Adapter Engine. This audit trace is not the same as the audit log found in the PI 7.0 Runtime Workbench. For example:
a message failed in the Adapter Engine after being successfully processed by the PI Integration Engine. Now I want to trace the user ID by whom the message was resent or cancelled.
Please let me know how I can achieve this and how I can access the different tables in the Java layer.
Thanks
Sugata Bagchi Majumder
These 3 are the tables for the XI Adapter in the ABAP stack.
SWFRXICNT
SWFRXIHDR
SWFRXIPRC
You can also try the following tables.
SXMSAEADPMOD XI: Adapter and Module Information
SXMSAEADPMODCHN XI: Adapter Module Chains
SXMSAEAGG XI: Adapter Runtime Data (Aggregated)
SXMSAERAW XI: Adapter Runtime Data (Raw Data)
Cheers,
Sarath.
Award if helpful. -
ALL_TAB_COLUMNS table name showing with fancy characters
Dear All
When I executed the following query, I got a table name like BIN$eeHkLT/tL+PgRAAhKBTEWA==$0:
select * from all_tab_columns
where column_name like '%APPLICATIONORDER%'
Please let me know what this is and how to select the data from it.
Thanking you
It's the recycle bin; please refer to the Oracle documentation to understand this feature.
See below, from the Oracle documentation: (http://www.oracle.com/technology/pub/articles/10gdba/week5_10gdba.html)
Managing the Recycle Bin
If the tables are not really dropped in this process--therefore not releasing the tablespace--what happens when the dropped objects take up all of that space?
The answer is simple: that situation does not even arise. When a tablespace is completely filled up with recycle bin data such that the datafiles have to extend to make room for more data, the tablespace is said to be under "space pressure." In that scenario, objects are automatically purged from the recycle bin in a first-in-first-out manner. The dependent objects (such as indexes) are removed before a table is removed.
Similarly, space pressure can occur with user quotas as defined for a particular tablespace. The tablespace may have enough free space, but the user may be running out of his or her allotted portion of it. In such situations, Oracle automatically purges objects belonging to that user in that tablespace.
In addition, there are several ways you can manually control the recycle bin. If you want to purge the specific table named TEST from the recycle bin after its drop, you could issue
PURGE TABLE TEST;
or using its recycle bin name:
PURGE TABLE "BIN$04LhcpndanfgMAAAAAANPw==$0";
This command will remove table TEST and all dependent objects such as indexes, constraints, and so on from the recycle bin, saving some space. If, however, you want to permanently drop an index from the recycle bin, you can do so using:
purge index in_test1_01;
which will remove the index only, leaving the copy of the table in the recycle bin.
Sometimes it might be useful to purge at a higher level. For instance, you may want to purge all the objects in recycle bin in a tablespace USERS. You would issue:
PURGE TABLESPACE USERS;
You may want to purge only the recycle bin for a particular user in that tablespace. This approach could come handy in data warehouse-type environments where users create and drop many transient tables. You could modify the command above to limit the purge to a specific user only:
PURGE TABLESPACE USERS USER SCOTT;
A user such as SCOTT would clear his own recycle bin with
PURGE RECYCLEBIN;
You as a DBA can purge all the objects in any tablespace using
PURGE DBA_RECYCLEBIN;
As you can see, the recycle bin can be managed in a variety of different ways to meet your specific needs. -
How to write select query for all the user tables in database
Can anyone tell me how to select the columns from all the user tables in a database?
Here I have 3 columns as input:
1.phone no
2.memberid
3.sub no.
I have to select call time, record, agn from all the tables in the database... all database tables have the same column names, but some may have additional columns.
Eg: select call time, record,agn from ah_t_table where phone no= 6186759765,memberid=j34563298
The query has to execute not only for this table but for all user tables in the database; all the tables start with ah_t.
I have been trying to write this query for 30 days...
Help me please... any kind of help is appreciated.
Hi,
user13113704 wrote:
... i need to include the symbol (') for the numbers (values) to get selected..
eg: phone no= '6284056879'
To include a single-quote in a string literal, use 2 of them in a row, as shown below.
Starting in Oracle 10, you can also use Q-notation:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/sql_elements003.htm#i42617
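The quote-doubling rule can be sketched when generating SQL text from another language; this is for illustration only (bind variables are the safer choice whenever possible):

```python
def sql_literal(s: str) -> str:
    """Render a string as a SQL string literal, doubling any
    embedded single quotes per the SQL quoting rule."""
    return "'" + s.replace("'", "''") + "'"
```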
...and also can you tell me how to execute the output of this script.
What front end are you using? If it's SQL*Plus, then you can SPOOL the query output to a file, and then execute that file, like this:
-- Suppress SQL*Plus features that interfere with raw output
SET FEEDBACK OFF
SET PAGESIZE 0
-- Run preliminary query to generate main query
SPOOL c:\my_sql_dir\all_ah_t.sql
SELECT 'select call time, record, agn from '
|| owner
|| '.'
|| table_name
|| ' where phone_no = ''6186759765'' and memberid = j34563298'
|| CASE
WHEN ROW_NUMBER () OVER ( ORDER BY owner DESC
, table_name DESC
) = 1
THEN ';'
ELSE ' UNION ALL'
END AS txt
FROM all_tables
WHERE SUBSTR (table_name, 1, 4) = 'AH_T'
ORDER BY owner
, table_name;
SPOOL OFF
-- Restore SQL*Plus features that interfere with raw output (if desired)
SET FEEDBACK ON
SET PAGESIZE 50
-- Run main query:
@c:\my_sql_dir\all_ah_t.sql
so that i form a temporary view for this script as a table (or store the result in a temp table) and my problem will be solved..
Sorry, I don't understand. What is a "temporary view"? -
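The script-generation idea above can also be sketched outside SQL*Plus; the table names, column names, and predicate below are placeholders, not real objects:

```python
def build_union_query(tables, columns, predicate):
    """Build one UNION ALL query over identically shaped tables,
    terminating the last branch with a semicolon (as the CASE
    expression in the SQL*Plus script does)."""
    branches = [
        "select {} from {} where {}".format(", ".join(columns), t, predicate)
        for t in tables
    ]
    return "\nUNION ALL\n".join(branches) + ";"

sql = build_union_query(
    ["AH_T_TABLE1", "AH_T_TABLE2"],           # hypothetical AH_T% tables
    ["call_time", "record", "agn"],
    "phone_no = '6186759765'",
)
```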
What are the other tables in B2 cluster
hi experts,
What are the other tables, like the ZL table, in cluster B2, and what data is stored in them?
Results related to a time evaluation period in cluster B2:
Table | Description | Origin | Time Dependency
PSP | Personal work schedule | Time evaluation result | For each day
QTACC | Generation of quota entitlement | Time evaluation result | For each day
QTTRANS | Transfer pool | Time evaluation result | For each day
ZES | Time balances for each day | Time evaluation result | For each day
SALDO | Cumulated time balances | Time evaluation result | Time evaluation period
ZL | Time wage types | Time evaluation result | For each day
VS | Variable balances | Time evaluation result | For each day
CVS | Cumulated variable balances | Time evaluation result | Time evaluation period
FEHLER | Messages | Time evaluation result | For each day
PT | Time pairs | Raw data/time evaluation result | For each day
KNTAG | Core night work (relevant only for the German country version) | Time evaluation result | For each day
Results related to a time period in cluster B2:
Table | Description | Origin | Time Dependency
VERT | Substitutions | Copy of infotype 2003 | Period
ABWKONTI | Absence quotas | Copy of infotype 2006 | Period
AB | Absences | Copy of infotype 2001 | Period
ANWES | Attendances | Copy of infotype 2002 | Period
RUFB | On-call availability | Copy of infotype 2004 | Period
MEHR | Overtime | Copy of infotype 2005 | Period
ANWKONTI | Attendance quotas | Copy of infotype 2007 | Period
SKO | Time transfer specifications | Copy of infotype 2012 | Period
ALP | Different payment | Raw data | Pointer to table entry
C1 | Cost distribution | Raw data | Pointer to table entry
Edited by: BALAPANI on Oct 19, 2009 10:38 AM -
Percentage of Total Count of Category in Raw Data Column in Pivot Table
I have normalized my data in Excel and I have a column "Question 1" and a column "Gender", whose values are "female" and "male". I have created a pivot table for these two variables. The counts of females and males from the Gender data are presented within my pivot table. I want to present these numbers as percentages of the total "females" and "males" in the "Gender" column of my master dataset. Pivot tables will only let me do percentages of data already in the pivot table.
I am attempting to use calculated fields within pivot tables to resolve this. I do not want to create a separate column in my master data set, but rather complete all calculations within the pivot table.
Thank you very much,
The steps you provided use the total "gender" within the pivot table as the grand total.
If I have a binomial variable "gender" with "male" as one of the values, I need the grand total to be the total males in the raw data.
Example: suppose there are 100 total males in the raw data for the column "gender", and I construct a pivot table with a dependent variable that is a survey question, where 30 males "agreed" and 20 females "agreed". Your instructions for % of grand total would show 60% for males who agree [males + females who replied to the survey question], when what I need is the total males in the raw data, 100, used as the denominator, so that the percentage of males who "agreed" is 30% of all males.
Thank you very much -
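The calculation being asked for (use the raw-data category total as the denominator, not the pivot's grand total) can be sketched as follows; the male numbers follow the example in the post, while the female total is an assumed figure for illustration:

```python
def pct_of_category_total(agreed, category_totals, gender):
    """Percent of ALL members of a gender (counted in the raw data)
    who agreed, rather than a percent of the pivot's grand total."""
    return 100.0 * agreed[gender] / category_totals[gender]

category_totals = {"male": 100, "female": 80}   # raw-data counts (80 assumed)
agreed = {"male": 30, "female": 20}             # "agreed" rows in the pivot
```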
How we will know that dimension size is more than the fact table size?
how we will know that dimension size is more than the fact table size?
Hi,
Let us assume that we put Division and Distribution Channel in one dimension, and that we have 20 distinct values for Division in R/3 and 30 distinct values for Distribution Channel. So at maximum we can get 20 * 30 = 600 records in the dimension table, and we can make a rough estimate of the number of records in the cube by observing the raw data in the source system.
With rgds,
Anil Kumar Sharma .P -
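The rough estimate described above is just the product of the distinct value counts of the characteristics grouped into the dimension; a minimal sketch:

```python
from math import prod

def max_dimension_rows(distinct_counts):
    """Upper bound on dimension-table rows: the product of the
    distinct value counts of the characteristics grouped together."""
    return prod(distinct_counts)

# 20 divisions x 30 distribution channels, as in the example above
```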
How many bytes does a refresh check on the DDLOG table cost?
Hello,
each application server checks the DDLOG table in the database after "rdisp/bufrefreshtime" to see whether any of its buffered tables or table entries are invalid.
Only invalid tables or table entries appear in the DDLOG table. Once an application server knows which of its tables are invalid, it can synchronize them on the next read access.
Does anybody know how many bytes such a check costs?
The whole DDLOG must be read by each application server, so it depends on the number of entries in DDLOG.
Does anybody know?
thx, holger
Hi,
Except for some system fields and timestamps, everything is stored in a raw field.
Checking FM SBUF_SEL_DDLOG_RECS, I found some additional info:
- There are several synchronization classes
- Classes 8 and 16 don't contain table or key info -> complete buffer refresh
- Other classes should have a table name
- In this case is an option for a key definition
-> I guess generic and single buffers are handled with corresponding key fields; fully buffered tables are probably handled without key fields.
An entry in DDLOG itself is the flag/marker for: this buffer is invalid.
It's obviously single/generic-key specific - otherwise the whole concept of single/generic keys would be obsolete.
Christian -
Populate data into the dynamic table ie using field symbols
Dear All,
I need to convert XML data into an internal table. I did this using the guidelines in the forum, and I can get my data
in the format of:
Cname Cvalue
id 1
name XX
id 2
name YY
But I need the values in an internal table format like:
ID Name
1 XX
2 YY
I used the below code to create the dynamic table strucure.
call method cl_alv_table_create=>create_dynamic_table
exporting
it_fieldcatalog = ifc
importing
ep_table = dy_table.
assign dy_table->* to <itab>.
* Create dynamic work area and assign to FS
create data dy_line like line of <itab>.
assign dy_line->* to <wa>.
So now my structure will be like ID Name.
I am stuck at populating the data (1, XX, 2, YY) into this dynamic table.
If you have come across this scenario, can anyone advise me?
Regards,
Anita Vizhi Arasi B
Hi Anita,
Try to understand the code given below. It does what you want, but I used a function module, not a method.
TYPES: BEGIN OF ty_xml,
raw(255) TYPE x,
END OF ty_xml.
DATA: lv_file_name TYPE rlgrap-filename,
lit_hdr TYPE TABLE OF ty_hdr,
ls_hdr TYPE ty_hdr,
lv_file TYPE string,
wa_xml TYPE ty_xml,
lit_xml TYPE STANDARD TABLE OF ty_xml,
lv_filename TYPE string ,
ls_xmldata TYPE xstring ,
lit_result TYPE STANDARD TABLE OF smum_xmltb,
ls_result TYPE smum_xmltb,
lit_return TYPE STANDARD TABLE OF bapiret2,
lv_size TYPE i,
lv_count TYPE i.
CONSTANTS: line_size TYPE i VALUE 255.
REFRESH lit_hdr.
*~ File selected from Local System
CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
EXPORTING
program_name = syst-repid
dynpro_number = syst-dynnr
CHANGING
file_name = lv_file_name
EXCEPTIONS
mask_too_long = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
lv_file = lv_file_name.
*~ Upload for Data Provider
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = lv_file
filetype = 'BIN'
has_field_separator = ' '
header_length = 0
IMPORTING
filelength = lv_size
TABLES
data_tab = lit_xml
EXCEPTIONS
OTHERS = 1.
*~ Convert from Binary to String
CALL FUNCTION 'SCMS_BINARY_TO_XSTRING'
EXPORTING
input_length = lv_size
IMPORTING
buffer = ls_xmldata
TABLES
binary_tab = lit_xml
EXCEPTIONS
failed = 1
OTHERS = 2.
*~ Parse XML docment into a table structure
CALL FUNCTION 'SMUM_XML_PARSE'
EXPORTING
xml_input = ls_xmldata " Buffered data
TABLES
xml_table = lit_result " final internal table which contain records
return = lit_return.
LOOP AT lit_result INTO ls_result.
IF ls_result-hier = '3'.
IF ls_result-type = 'V'.
CASE ls_result-cname.
WHEN 'intno'. "Internal Number
ls_hdr-intno = ls_result-cvalue.
WHEN 'acode'. "Article Code
ls_hdr-matnr = ls_result-cvalue.
WHEN 'adesc'. "Article Description
ls_hdr-maktx = ls_result-cvalue.
WHEN 'idesc'. "Item Description
ls_hdr-itmds = ls_result-cvalue.
WHEN 'sdesc'. "Standard Description
ls_hdr-stdds = ls_result-cvalue.
WHEN 'at'. "Article Type
ls_hdr-mtart = ls_result-cvalue.
WHEN 'mc'. "Merchandise Category
ls_hdr-matkl = ls_result-cvalue.
WHEN 'cp'. "Characteristic Profile
ls_hdr-charp = ls_result-cvalue.
CONDENSE ls_hdr-charp.
WHEN 'c1'.
ls_hdr-col01 = ls_result-cvalue.
WHEN 'c2'.
ls_hdr-col02 = ls_result-cvalue.
WHEN 'c3'.
ls_hdr-col03 = ls_result-cvalue.
WHEN 'c4'.
ls_hdr-col04 = ls_result-cvalue.
WHEN 'c5'.
ls_hdr-col05 = ls_result-cvalue.
WHEN 'c6'.
ls_hdr-col06 = ls_result-cvalue.
WHEN 'tc'. "Tax Classification
ls_hdr-taklv = ls_result-cvalue.
WHEN 's'. "Season
ls_hdr-saiso = ls_result-cvalue.
WHEN 'sy'. "Season Year
ls_hdr-saisj = ls_result-cvalue.
WHEN 'fg'. "Fashion Grade
ls_hdr-fashg = ls_result-cvalue.
WHEN 'rm'. "Reference Material
ls_hdr-rfmat = ls_result-cvalue.
WHEN 'fcv'. "Free Character Value
ls_hdr-frecv = ls_result-cvalue.
WHEN 'uom'. "Unit of Measure
ls_hdr-uom = ls_result-cvalue.
WHEN 'pou'. "PO Unit
ls_hdr-pount = ls_result-cvalue.
WHEN 'v'. "Vendor
ls_hdr-lifnr = ls_result-cvalue.
WHEN 'b'. "Vendor
ls_hdr-brand = ls_result-cvalue.
WHEN 'pg'. "Purchasing Group
ls_hdr-wekgr = ls_result-cvalue.
WHEN 'rv'. "Regular Vendor
ls_hdr-rlifn = ls_result-cvalue.
WHEN 'pp'. "Pricing Profile
ls_hdr-sprof = ls_result-cvalue.
WHEN 'sp'. "Sales Price
ls_hdr-spric = ls_result-cvalue.
WHEN 'm'. "Margin
ls_hdr-margn = ls_result-cvalue.
WHEN 'c'. "Calculate
ls_hdr-pcalc = ls_result-cvalue.
WHEN 'purp'. "Purchase Price
ls_hdr-ppric = ls_result-cvalue.
WHEN 'a'. "Assortment
ls_hdr-asort = ls_result-cvalue.
WHEN 'bm'. "Batch Management
ls_hdr-batch = ls_result-cvalue.
WHEN 'mrl'. "Min. Remaining Life
ls_hdr-minrl = ls_result-cvalue.
WHEN 'aag'. "Account Assignment Group
ls_hdr-acass = ls_result-cvalue.
WHEN 'vc'. "Valuation Class
ls_hdr-valcl = ls_result-cvalue.
WHEN 'eancat'. "EAN Category
ls_hdr-eanct = ls_result-cvalue.
WHEN 'ean11'.
ls_hdr-ean11 = ls_result-cvalue.
ENDCASE.
AT END OF hier.
APPEND ls_hdr TO lit_hdr.
ENDAT.
ENDIF.
ENDIF.
ENDLOOP.
APPEND LINES OF lit_hdr TO git_hdr.
DELETE git_hdr WHERE maktx IS INITIAL "Article Description
AND mtart IS INITIAL "Article Type
AND matkl IS INITIAL "Merchandise Category
AND charp IS INITIAL "Characteristic Profile
AND taklv IS INITIAL "Tax Classification
AND uom IS INITIAL "Unit of Measure
AND pount IS INITIAL "PO Unit
AND lifnr IS INITIAL "Vendor
AND brand IS INITIAL "Brand
AND wekgr IS INITIAL "Purchasing Group
AND ppric IS INITIAL "Purchasing Price
AND spric IS INITIAL "Sales Price
AND acass IS INITIAL "A/c Assign. Grp.
AND valcl IS INITIAL "Valuation Class
AND saiso IS INITIAL "Season
AND saisj IS INITIAL. "Season Year
IF git_hdr[] IS NOT INITIAL.
CLEAR: lv_count.
LOOP AT git_hdr INTO ls_hdr.
lv_count = lv_count + 1.
ls_hdr-intno = lv_count.
MODIFY git_hdr FROM ls_hdr TRANSPORTING intno.
CLEAR: ls_hdr.
ENDLOOP.
ENDIF.
The code written here is part of my program. Try to understand it; I hope it will help you out.
Regards,
Narendra -
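The reshaping the question asks for (turning a stream of cname/cvalue pairs into one row per record) can be sketched generically; treating a repeat of "id" as the start of a new row is an assumption based on the sample data in the question:

```python
def pairs_to_rows(pairs, row_start_key="id"):
    """Collapse (cname, cvalue) pairs into row dicts, starting a new
    row whenever row_start_key appears again."""
    rows, current = [], {}
    for cname, cvalue in pairs:
        if cname == row_start_key and current:
            rows.append(current)      # flush the finished row
            current = {}
        current[cname] = cvalue
    if current:
        rows.append(current)          # flush the last row
    return rows
```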
A partition tab is needed in the schema-table browser and it's missing
After a first look at the product, I couldn't find a partition tab in the schema/table browser.
For those using partitions it is absolutely essential.
It's easy to add:
create a file that has the following content
<?xml version="1.0" encoding="UTF-8"?>
<items>
<item type="sharedQuery" id="PartSubPartkeys">
<query minversion="9">
<sql>
<![CDATA[ select 'PARTITION KEYS' PARTITION_LEVEL,substr(sys_connect_by_path(column_name,','),2) "PARTITION KEYS"
from (select column_name, column_position
from all_part_key_columns
where owner = :OBJECT_OWNER
and name = :OBJECT_NAME
and object_type='TABLE' )
start with column_position=1
connect by column_position=prior column_position+1
union all
select 'SUBPARTITION KEYS' ,substr(sys_connect_by_path(column_name,','),2)
from (select column_name, column_position
from all_subpart_key_columns
where owner = :OBJECT_OWNER
and name = :OBJECT_NAME
and object_type='TABLE' )
start with column_position=1
connect by column_position=prior column_position+1]]></sql>
</query>
</item>
<item type="sharedQuery" id="PartSubPartkeysFI">
<query minversion="9">
<sql>
<![CDATA[ select 'PARTITION KEYS' PARTITION_LEVEL,substr(sys_connect_by_path(column_name,','),2) "PARTITION KEYS"
from (select column_name, column_position
from all_part_key_columns
where owner = :OBJECT_OWNER
and name = (select table_name
from all_indexes
where index_name=:OBJECT_NAME
and owner=:OBJECT_OWNER)
and object_type='TABLE' )
start with column_position=1
connect by column_position=prior column_position+1
union all
select 'SUBPARTITION KEYS' ,substr(sys_connect_by_path(column_name,','),2)
from (select column_name, column_position
from all_subpart_key_columns
where owner = :OBJECT_OWNER
and name =(select table_name
from all_indexes
where index_name=:OBJECT_NAME
and owner=:OBJECT_OWNER)
and object_type='TABLE' )
start with column_position=1
connect by column_position=prior column_position+1]]></sql>
</query>
</item>
<item type="sharedQuery" id="Partitions">
<query minversion="9">
<sql>
<![CDATA[ select partition_name, num_rows,AVG_ROW_LEN, blocks ,LAST_ANALYZED from all_tab_partitions where table_owner = :OBJECT_OWNER and table_name = :OBJECT_NAME order by partition_position]]></sql>
</query>
</item>
<item type="sharedQuery" id="SubPartitions">
<query minversion="9">
<sql>
<![CDATA[ select subpartition_name, partition_name, num_rows,AVG_ROW_LEN, blocks ,LAST_ANALYZED from all_tab_subpartitions where table_owner = :OBJECT_OWNER and table_name = :OBJECT_NAME order by partition_name,subpartition_position]]></sql>
</query>
</item>
<item type="editor" node="TableNode" >
<title><![CDATA[Partitions/SubPartitions]]></title>
<query id="PartSubPartkeys" />
<subquery>
<title>Partitions/SubPartition</title>
<query>
<sql><![CDATA[select partition_position, partition_name "Partition/Subpartition", tablespace_name,high_value,compression,num_rows,AVG_ROW_LEN, blocks ,LAST_ANALYZED from all_tab_partitions where table_owner = :OBJECT_OWNER and table_name = :OBJECT_NAME and 'PARTITION KEYS'=:PARTITION_LEVEL
union all
select subpartition_position, partition_name||'/'||subpartition_name, tablespace_name,high_value,compression,num_rows,AVG_ROW_LEN, blocks ,LAST_ANALYZED from all_tab_subpartitions where table_owner = :OBJECT_OWNER and table_name = :OBJECT_NAME and 'SUBPARTITION KEYS' =:PARTITION_LEVEL
order by 2]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="MViewNode" >
<title><![CDATA[Partitions/SubPartitions]]></title>
<query id="PartSubPartkeys" />
<subquery>
<title>Partitions/SubPartition</title>
<query>
<sql><![CDATA[select partition_position, partition_name "Partition/Subpartition", tablespace_name,
high_value,compression,num_rows,AVG_ROW_LEN, blocks ,LAST_ANALYZED
from all_tab_partitions where table_owner = :OBJECT_OWNER and table_name = :OBJECT_NAME and 'PARTITION KEYS'=:PARTITION_LEVEL
union all
select subpartition_position, partition_name||'/'||subpartition_name, tablespace_name,high_value,
compression,num_rows,AVG_ROW_LEN, blocks ,LAST_ANALYZED
from all_tab_subpartitions where table_owner = :OBJECT_OWNER and table_name = :OBJECT_NAME and 'SUBPARTITION KEYS' =:PARTITION_LEVEL
order by 2]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="IndexNode" >
<title><![CDATA[Partitions/SubPartitions]]></title>
<query id="PartSubPartkeysFI" />
<subquery>
<title>Partitions/SubPartition</title>
<query>
<sql><![CDATA[select partition_position, partition_name "Partition/Subpartition", tablespace_name,high_value,compression,
Leaf_Blocks, Distinct_Keys, clustering_factor ,LAST_ANALYZED
from all_ind_partitions where index_owner = :OBJECT_OWNER and index_name = :OBJECT_NAME
and 'PARTITION KEYS'=:PARTITION_LEVEL
union all
select subpartition_position, partition_name||'/'||subpartition_name, tablespace_name,high_value,compression,
Leaf_Blocks, Distinct_Keys, clustering_factor ,LAST_ANALYZED
from all_ind_subpartitions
where index_owner = :OBJECT_OWNER
and index_name = :OBJECT_NAME
and 'SUBPARTITION KEYS'=:PARTITION_LEVEL
order by 2]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="TableNode">
<title><![CDATA[Unabridged SQL]]></title>
<query>
<sql><![CDATA[select :OBJECT_OWNER OOWNER, :OBJECT_NAME ONAME, 'TABLE' OTYPE from dual union all select owner,index_name,'INDEX' from all_indexes where table_owner= :OBJECT_OWNER and table_name=:OBJECT_NAME ]]></sql>
</query>
<subquery type="code">
<query>
<sql><![CDATA[select dbms_metadata.get_ddl(:OTYPE,:ONAME, :OOWNER) "SQL Statements" from dual]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="TableNode">
<title><![CDATA[Partition Columns Statistics]]></title>
<query id="Partitions" />
<subquery>
<query>
<sql>
<![CDATA[ select COLUMN_NAME, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE, DENSITY, NUM_NULLS
from all_part_col_statistics where owner = :OBJECT_OWNER
and table_name = :OBJECT_NAME
and partition_name= :PARTITION_NAME
order by column_name]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="TableNode">
<title><![CDATA[SUBPartition Columns Statistics]]></title>
<query id="SubPartitions" />
<subquery>
<query>
<sql>
<![CDATA[ select COLUMN_NAME, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE, DENSITY, NUM_NULLS from all_subpart_col_statistics where owner = :OBJECT_OWNER and table_name = :OBJECT_NAME and subpartition_name=:SUBPARTITION_NAME order by column_name]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="MViewNode">
<title><![CDATA[Partition Columns Statistics]]></title>
<query id="Partitions" />
<subquery>
<query>
<sql>
<![CDATA[ select COLUMN_NAME, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE, DENSITY, NUM_NULLS
from all_part_col_statistics where owner = :OBJECT_OWNER
and table_name = :OBJECT_NAME
and partition_name= :PARTITION_NAME
order by column_name]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="MViewNode">
<title><![CDATA[SUBPartition Columns Statistics]]></title>
<query id="SubPartitions" />
<subquery>
<query>
<sql>
<![CDATA[ select COLUMN_NAME, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE, DENSITY, NUM_NULLS from all_subpart_col_statistics where owner = :OBJECT_OWNER and table_name = :OBJECT_NAME and subpartition_name=:SUBPARTITION_NAME order by column_name]]></sql>
</query>
</subquery>
</item>
<item type="editor" node="SchemaFolder" minversion="10.1">
<title><![CDATA[Sessions]]></title>
<query>
<sql><![CDATA[select sid,serial#,program,last_call_et,machine, status, sql_hash_value shv,sql_child_number scn
from v$session
order by 1]]></sql>
</query>
<subquery>
<query>
<sql><![CDATA[select * from table(dbms_xplan.display_cursor(:SHV,:SCN))]]></sql>
</query>
</subquery>
</item>
</items>
and add the following line to your ide.conf file (in the jdev/bin directory of the SQL Developer install dir):
AddVMOption -Draptor.user.editors=fullpathofthefile(dir and name)
Restart, and you'll get several additional tabs beyond the ones displayed for tables.
enjoy -
Will Elements and Lightroom open RAW files from the Sony DSC-RX100 camera?
Will Elements and Lightroom open RAW files from the Sony DSC-RX100 camera?
202daansn wrote:
Will Elements and Lightroom open RAW files from the Sony DSC-RX100 camera?
I have the RX100 and confess to be an Adobe fan. You did not say which Lightroom or which elements you want to use. You need the latest versions.
Photoshop Elements 11 comes with Adobe Camera Raw (ACR) version 7.0 when you install it. Under the Help menu there is a choice to check for updates; if you do, ACR 7.2 installs. The RX100 is included in 7.2. Previous versions of Photoshop Elements will not update to 7.2. When you open a RAW photo, a screen pops up that lets you adjust, convert, and save to something Photoshop Elements can handle.
You have to have Lightroom 4.2 to get RX100 support. Lightroom uses, but hides, the ACR engine; instead, the slider controls are built into the user interface. When you buy it, it installs as 4.0. The Help menu also has an update choice so that you can get to 4.2 from 4.0.
You didn't ask, but Premiere Elements 11 is a good choice for video because it is the first version that "officially" supports the high-quality 1080p60 ("PS") setting on the RX100.
On T day, I shot RAW photos of my granddaughters dancing to a Wii game in terrible light. Yesterday I "adjusted" the batch of RAW shots taken with the RX100 in Lightroom and saved them as .jpg files. Today, I will make a video in Premiere Elements from those .jpg images, put in some pan and zoom effects, add a sound track, and put it on Vimeo.
My view is that an RX100 owner needs PSE 11, PrE 11, and Lightroom 4 to take advantage of the camera's capabilities. Setting it to "fine" .jpg and uploading snapshots to Flickr leaves an awful lot on the table!
Bill