Usage of HZ_RELATIONSHIPS table in TCA
Hi all,
Does anybody know the usage of the following columns of the HZ_RELATIONSHIPS table in TCA?
Object_table_name
Subject_table_name
Kindly post a reply with an example.
Zulqarnain
Please re-check the eTRM.
The SUBJECT_ID and OBJECT_ID columns identify the two parties between which the relationship exists.
SUBJECT_TABLE_NAME: source table name for the subject
OBJECT_TABLE_NAME: source table name for the object
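A quick illustration of how those columns work: HZ_RELATIONSHIPS stores polymorphic foreign keys, so SUBJECT_TABLE_NAME/OBJECT_TABLE_NAME record which source table (usually HZ_PARTIES) the SUBJECT_ID/OBJECT_ID values point into. The sketch below uses a mock, heavily simplified schema in sqlite3 rather than the real TCA tables, which have many more columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Mock, simplified stand-ins for the TCA tables (illustration only).
cur.execute("CREATE TABLE hz_parties (party_id INTEGER, party_name TEXT)")
cur.execute("""CREATE TABLE hz_relationships (
    subject_id INTEGER, subject_table_name TEXT,
    object_id INTEGER, object_table_name TEXT,
    relationship_code TEXT)""")

cur.executemany("INSERT INTO hz_parties VALUES (?, ?)",
                [(1, "John Smith"), (2, "Acme Corp")])
# "John Smith is an EMPLOYEE_OF Acme Corp": both sides live in HZ_PARTIES,
# and that is exactly what the *_TABLE_NAME columns record.
cur.execute("INSERT INTO hz_relationships "
            "VALUES (1, 'HZ_PARTIES', 2, 'HZ_PARTIES', 'EMPLOYEE_OF')")

# Resolve the polymorphic keys: join back to the table named in *_TABLE_NAME.
row = cur.execute("""
    SELECT s.party_name, r.relationship_code, o.party_name
    FROM hz_relationships r
    JOIN hz_parties s ON r.subject_id = s.party_id
                     AND r.subject_table_name = 'HZ_PARTIES'
    JOIN hz_parties o ON r.object_id = o.party_id
                     AND r.object_table_name = 'HZ_PARTIES'
""").fetchone()
print(row)  # ('John Smith', 'EMPLOYEE_OF', 'Acme Corp')
```

The same join pattern applies in the real schema: always pair the *_ID column with its *_TABLE_NAME column when resolving the relationship.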
Similar Messages
-
Hi,
I make a usage decision for an inspection lot of 10 qty in QA32. I enter UD code A (accept) and a quantity of 4 and save; the stock gets posted. Again in QA32 I enter UD code A1 (conditional accept) for the same inspection lot and post the remaining qty of 6.
In which table can I get the quantities 4 and 6 for a given inspection lot? Kindly help.
Regards.
Hi,
In that function module I can only get details of the last UD code entered during the usage decision; previous UD codes are not there. The same happens in table QAMB.
Please guide.
Regards. -
How do I calculate disk usage for a table
Is there a formula or equation that I can use to calculate disk usage (in MB) for a table?
I would like to find out the best option for initial storage space and monthly growth.
Hi,
We will discuss this by taking a simple example:
SQL>Create table T_REGISTRATION (Name varchar2 (100),
Fathers_Name varchar2 (100),
Age number(5),
Sex char (1),
Date_of_Birth Date)
storage (initial 40k next 40k minextents 5)
Tablespace TBS1;
Table created.
SQL> insert into T_REGISTRATION values ('Name_1','Name_1_Father',40,'M','24-FEB-66');
1 row created.
SQL> Commit;
Commit complete
SQL> analyze table T_REGISTRATION compute statistics;
Table analyzed.
SQL> compute sum of blocks on report
SQL> break on report
SQL> select extent_id, bytes, blocks
from user_extents
where segment_name = 'T_REGISTRATION'
and segment_type = 'TABLE';
EXTENT_ID BYTES BLOCKS
0 65536 8
1 65536 8
2 65536 8
3 65536 8
Sum 32
SQL> clear breaks
SQL> select blocks, empty_blocks,
avg_space, num_freelist_blocks
from user_tables
where table_name = 'T_REGISTRATION';
BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
5 27 8064 0
From the above:
1. We have 32 blocks (Sum) allocated to the table.
2. 27 blocks are totally empty.
3. 5 blocks contain data (BLOCKS + EMPTY_BLOCKS = Sum, i.e. 5 + 27 = 32).
4. We have an average of about 8064 bytes = 7.8K free in each used block (8064/1024 = 7.8K, from the AVG_SPACE value above).
Therefore our table:
1. Consumes 5 blocks.
2. Of these, 5 blocks * 8K block size - 5 blocks * 7.8K free = about 1K is used for the inserted data.
Source for this example :- asktom.oracle.com
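The arithmetic above can be checked directly (numbers taken from the example output; an 8K block size is assumed, as in the example):

```python
block_size = 8192          # 8K blocks, as assumed in the example
allocated_blocks = 32      # sum(BLOCKS) from USER_EXTENTS
used_blocks = 5            # BLOCKS from USER_TABLES (used blocks below the HWM)
empty_blocks = 27          # EMPTY_BLOCKS from USER_TABLES
avg_free = 8064            # AVG_SPACE: average free bytes per used block

# Used plus empty blocks account for everything allocated.
assert used_blocks + empty_blocks == allocated_blocks

# Space actually consumed by row data inside the used blocks:
data_bytes = used_blocks * block_size - used_blocks * avg_free
print(data_bytes, "bytes of row data")  # 640 bytes, i.e. about 0.625K
# Rounding AVG_SPACE to 7.8K, as the post does, gives the quoted "about 1K".
```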
Thanks,
Sankar -
Usage of Error Table in OWB mapping
We use error tables to take care of defective rows in a mapping. We want the rows in the error table to remain and accumulate across loads; that is, we do not want it truncated between runs of the mapping. We have:
Truncate table error = NO
Roll-up errors = NO
But the table is truncated between each load anyway.
Does anyone have experience with how to keep the table from being truncated?
We are using OWB 11.2.0.1
/John
Hi John
This looks to be fixed in the 11.2.0.2 OWB patch.
The bug is 9661088; otherwise you'd have to move/process the errors before each re-run.
Cheers
David -
Reg Usage of ODS Table in Transformation
Hi all,
I am new to SAP NetWeaver BI. Can I use the ODS table in the transformation rules?
Is this the right approach or not? My requirement is that I have to populate some information into a new ODS depending on the data available in another ODS table.
Regards
Prashanth
Hi Prashant,
Please refer to the sources below to understand Transformations...
<a href="http://help.sap.com/saphelp_nw2004s/helpdata/EN/e3/e60138fede083de10000009b38f8cf/frameset.htm">http://help.sap.com/saphelp_nw2004s/helpdata/EN/e3/e60138fede083de10000009b38f8cf/frameset.htm</a>
Also refer to this white paper in SDN on Transformations
<a href="https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6090a621-c170-2910-c1ab-d9203321ee19">https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6090a621-c170-2910-c1ab-d9203321ee19</a>
To ensure good performance, use the Start or End Routine to execute such lookups on DSO tables.
Regards,
Shrikant -
@prompt functions usage in Derived table
How can we use the @prompt function in a derived table? Please reply ASAP.
Advance thanks.
Thanks & Regards
Ramu
Hi Ramu,
It is quite straightforward. You define your derived table using the normal SQL, but put the @prompt code in the WHERE clause.
This worked for me in XI 3.0 using the Efashion universe.
SELECT
Article_Lookup_criteria.Article_Lookup_criteria_id,
Article_Lookup_criteria.Article_criteria_id,
Article_Lookup_criteria.criteria_type
FROM
Article_Lookup_criteria
WHERE
Article_Lookup_criteria.criteria_label = @prompt('Label','A',,Mono,Free,Persistent)
Regards
Alan -
Usage of Temp tables in SSIS 2012
Hello,
We have many SSIS packages (2008 R2) which import data into a temp table and process it from there.
We are upgrading to SQL Server 2012 and facing an issue with using a temp table as the working table: our SSIS packages fail in 2012. While investigating we found that SQL Server 2012 deprecates the FMTONLY option
and instead uses
sp_describe_first_result_set, which does not support temp tables as the import table. SSIS works fine on our workstations but not on the DEV box. With SQL 2012, I can execute from my workstation, which has 11.0.2100.60, whereas the DEV server
has SQL Server version 11.0.3000.0.
Also, when I ran Profiler against the DEV box for comparison, it shows two different statements.
From the workstation (11.0.2100.60):
CREATE TABLE #temp (
Id varchar(255) NULL,
Name varchar(255) NULL )
go
declare @p1 int
set @p1=NULL
declare @p3 int
set @p3=229378
declare @p4 int
set @p4=294916
declare @p5 int
set @p5=NULL
exec sp_cursoropen @p1 output,N'select * from #temp',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
go
it works fine
But with the DEV server (version 11.0.3000.0), it executes the SQL below and fails to get the metadata:
CREATE TABLE #temp (
Id varchar(255) NULL,
Name varchar(255) NULL )
exec [sys].sp_describe_first_result_set N'select * from [dbo].[#temp]'
On checking the assembly differences between the versions, I could only see Microsoft.SqlServer.ManagedDTS.dll being 11.0.3000.0, which I replaced with the 11.0.2100.60 version, but I am still getting the same result.
The other difference I found is with the .NET Framework libraries.
Could you advise what assembly is causing this issue between our workstation and the DEV server, i.e. 11.0.2100.60 and 11.0.3000.0?
Many thanks
The scripts are taken from Profiler.
The error message is
The metadata could not be determined because statement 'Select * from #branchscan' uses a temp table.
I have seen the workaround suggesting table variables and global temp tables. We have around 100+ SSIS packages which use a temp table for loading the data from a flat file and a respective SP to process the data from the temp table. The above error is thrown during the pre-execute phase of the OLE DB Destination, when trying to get the metadata of the table.
At this stage, it would be difficult for us to change the logic to global temp tables or TVPs.
Thanks -
Error in usage of aggregate tables
Hi friends,
I have created a script for an aggregate table and it is good,
but when I want to use it, it shows an error.
My command in command prompt:
C:\MW_HOME2\Oracle_BI1\bifoundation\server\bin>nqcmd.exe -u Administrator -p Admin123 -d coreapplication_OH1341941094 -s c:\test.sql
and Error
Argument error near: ûd
Command: nqcmd.exe - a command line client which can issue SQL statements
against either Oracle BI server or a variety
of ODBC compliant backend databases.
Need help.
@zia
Hi Zia,
I suspect the BI Server is not able to understand the command at the "-d" parameter. From your command, I notice that it rolls over to a new line just at the -d parameter.
I think there is no space between the -d and your <DSN NAME> in the command. Just try giving a space between them and see if it works.
Hope this helps.
Thank you,
Dhar -
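One more thing worth checking, though it is only a guess from the error text: the 'û' in "Argument error near: ûd" is what a cp437 console prints for byte 0x96, which is an en dash in the cp1252 codepage. So the hyphen in -d may have been pasted from a document as an en dash rather than typed as an ASCII '-'. A quick check of that codepage hypothesis:

```python
# 0x96 is the cp1252 en dash; a cp437 (DOS) console renders that byte as 'û'.
byte = b"\x96"
assert byte.decode("cp1252") == "\u2013"   # EN DASH, looks like '-' in a document
assert byte.decode("cp437") == "û"         # what the Windows console prints
# So "ûd" in the error is consistent with "–d" having been pasted instead of "-d".
```

If that is the cause, retyping the command by hand instead of pasting it fixes the problem.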
Suggestions On Table design - Usage of Nested Tables.
Hi All,
DB Version - 10.2.0.1.0
We have an existing table, say REPORT_MASTER, with more than 4 million records (27 columns).
Now the requirement is as follows: every day, a job has to run to update each row in the REPORT_MASTER table. (The columns to be updated are new columns, to be designed.)
Rules for the update are generated dynamically depending on a parameter table.
In total, more than 500 update statements will run to update the whole table (it is not possible to reduce the number of updates). After the update, the records in the table will be like:
ExistingColumns NewColumn1 NewColumn2 NewColumn3 ........
........ BRF01.1 BRF02.5 BRF03.1 .......
........ BRF01.3 BRF02.1 BRF03.2 .......
........ ...... ...... ...... ......
The first few updates, for example, will update "NewColumn1". The next few updates will update "NewColumn2", and so on.
Now My Query is about designing the new columns:
1. If I do it as in the sample output above, there would be 37 new columns as per the current requirement. It can increase when a new report is required.
2. If I go for a nested table, my concern is about the update statement. Is there any way to achieve the code below?
update report_master
set new_nested_tab_col(5) = 'BRF02.6'
where .....
And how will the performance be?
3. If I go for a detail table, it will have a size of almost 37 times that of the master table.
I have to loop through this table (of course joining to the master table) 37 times again after the update for report generation. (This is required anyhow, but here the data volume is high.)
4. Another method, of course not very professional, is to keep a single varchar2 column and update that column, so that it will have a value like
BRF01.1,BRF02.3,BRF03.1....
Or is there any other better alternative?
Please suggest.
Thanks in advance,
Jeneesh
Edited by: jeneesh on Apr 13, 2009 11:36 AM
Update statements will be like as follows, if the basic design is selected :
update REPORT_MASTER set NEW_COLUMN1 = 'BRF01.1' where dynamic_condition1 and NEW_COLUMN1 is not null;
update REPORT_MASTER set NEW_COLUMN1 = 'BRF01.2' where dynamic_condition2 and NEW_COLUMN1 is not null;
update REPORT_MASTER set NEW_COLUMN2 = 'BRF02.1' where dynamic_condition25 and NEW_COLUMN2 is not null;
...
Edited by: jeneesh on Apr 13, 2009 12:40 PM
Taking it to front....
Edited by: jeneesh on Apr 14, 2009 8:37 AM
Still I am not comfortable using 37 columns..
APC wrote:
Not sure what you're expecting from us. I'm afraid you haven't explained your scenario clearly, so it's difficult to offer design advice.
I will try to explain better:
Basically, the requirement is to generate 37 reports (BRF01 to BRF37) from the table CRB_OUTPUT_VIEW_TAB.
(This is the master data table; in the original thread I mentioned it as REPORT_MASTER.)
Each report needs to apply different rules. For this, a RULE table (parameter table) is provided.
Master Data Table
SQL> desc crb_output_view_tab
Name Null? Type
COMPANY VARCHAR2(240)
GLNO VARCHAR2(144)
CURRENCY VARCHAR2(15)
CUSTOMER_NAME VARCHAR2(240)
ACC_DEAL_NO VARCHAR2(29)
FCY VARCHAR2(18)
LCY VARCHAR2(18)
INT_EXH_RT NUMBER
VALUE_DATE VARCHAR2(10)
MAT_DATE VARCHAR2(10)
CUSTOMER VARCHAR2(50)
BRANCH VARCHAR2(25)
INT_BASIS VARCHAR2(10)
OGLKEY VARCHAR2(207)
SECTOR VARCHAR2(6)
INDUSTRY VARCHAR2(6)
TARGET VARCHAR2(2)
RESIDENCE VARCHAR2(2)
NATIONALITY VARCHAR2(2)
REPORT_DATE VARCHAR2(11)
GL_DESC VARCHAR2(50)
PORTFOLIO VARCHAR2(10)
LOAN_TYPE VARCHAR2(10)
EXH_RT NUMBER
BLSHEET VARCHAR2(50)
SOURCE CHAR(3)
CATEGORY VARCHAR2(6)
BRF_CODE VARCHAR2(20)
DR_CR VARCHAR2(6)
NA VARCHAR2(6)
Rules - Master Table
SQL> select * from brf_parameters order by brf_id,seq;
PARAM_ID BRF_ID SEQ BRF_CODE COLUMN_NAME DATA_TYPE
2 BRF01 1 BRF010001 GLNO CHAR
1 BRF01 1 BRF010001 CURRENCY CHAR
3 BRF01 2 BRF010002 GLNO CHAR
4 BRF01 3 BRF010006 GLNO CHAR
6 BRF01 4 BRF010008 GLNO CHAR
5 BRF01 4 BRF010008 RESIDENCE CHAR
390 BRF02 1 BRF020001 RESIDENCE CHAR
391 BRF02 1 BRF020001 BRF_PARENT CHAR
392 BRF02 2 BRF020002 RESIDENCE CHAR
393 BRF02 2 BRF020002 INDUSTRY CHAR
394 BRF02 2 BRF020002 BRF_PARENT CHAR
Rules - detail Table: (Linked to brf_parameters by param_id)
SQL> select * from brf_param_values where param_id in (1,2);
PARAM_ID CONDITION VALUE1 VALUE2
1 IN AED
2 IN 0010
SQL> ed
Wrote file afiedt.buf
1 select p.seq,p.brf_id,p.brf_code,p.column_name,v.condition,v.value1,v.value2
2 from brf_parameters p,brf_param_values v
3 where p.param_id = v.param_id
4* order by p.brf_id,p.seq,p.param_id
SQL> /
SEQ BRF_ID BRF_CODE COLUMN_NAME CONDITION VALUE1 VALUE2
1 BRF01 BRF010001 CURRENCY IN AED
1 BRF01 BRF010001 GLNO IN 0010
2 BRF01 BRF010002 GLNO IN 0010
3 BRF01 BRF010006 GLNO IN 0030
4 BRF01 BRF010008 RESIDENCE IN AE
4 BRF01 BRF010008 GLNO IN 0040
5 BRF01 BRF010009 GLNO IN 0040
6 BRF01 BRF010004 GLNO IN 0050
6 BRF01 BRF010004 CATEGORY IN 5001
6 BRF01 BRF010004 DR_CR IN Debit
7 BRF01 BRF010049 DR_CR IN Debit
For generating the report BRF01:
The update will be done based on each "SEQ" value shown above. So..
1st update: (generated from SEQ = 1 for BRF_ID = BRF01)
update crb_output_view_tab set new_brf_code_column = 'BRF010001'
where currency in ('AED') and GLNO in ('0010');
--This is a sample; it will be like: GLNO in ('0010','0040','0056'.....)
2nd update: (generated from SEQ = 2 for BRF_ID = BRF01)
update crb_output_view_tab set new_brf_code_column = 'BRF010002'
where GLNO in ('0010')
and new_brf_code_column is null;
..... and so on.
The data in the "new_column" is required for report generation.
After all the updates, the report "BRF01" will be generated (through UTL_FILE).
After that, the update for BRF02 will be performed and its report generated.
I have to store the data for all of BRF01, BRF02, ... because these are used for online reports which are accessed from Discoverer as well.
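The rule-driven generation described above can be sketched roughly like this (illustrative only: the table and column names follow the thread, but the grouping logic is an assumption about how the real job builds its statements):

```python
from collections import defaultdict

# Rule rows as in the brf_parameters/brf_param_values join shown above:
# (seq, brf_code, column_name, condition, value)
rules = [
    (1, "BRF010001", "CURRENCY", "IN", "AED"),
    (1, "BRF010001", "GLNO",     "IN", "0010"),
    (2, "BRF010002", "GLNO",     "IN", "0010"),
]

def build_updates(rules, table="crb_output_view_tab", target="new_brf_code_column"):
    """One UPDATE per SEQ; all predicates of a SEQ are ANDed together."""
    grouped = defaultdict(lambda: defaultdict(list))
    codes = {}
    for seq, code, col, cond, val in rules:
        grouped[seq][(col, cond)].append(val)   # collect IN-list values per column
        codes[seq] = code
    stmts = []
    for seq in sorted(grouped):
        preds = [f"{col} {cond.lower()} ({', '.join(repr(v) for v in vals)})"
                 for (col, cond), vals in grouped[seq].items()]
        where = " and ".join(preds)
        # After the first SEQ, only rows not yet classified are touched.
        if seq > 1:
            where += f" and {target} is null"
        stmts.append(f"update {table} set {target} = '{codes[seq]}' where {where}")
    return stmts

for s in build_updates(rules):
    print(s)
```

This reproduces the two sample updates quoted above, one per SEQ, with the "is null" guard from the second statement onward.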
What changes are you making that has led to you considering nested tables or child tables?
I have explained in a detailed manner above. (Please revert if not clear.)
Why would a detail table be 37 times as big as one table?
I was thinking of adding a PK to "crb_output_view_tab" and a new detail table like
pk_from_crb_output_view_tab NUMBER
brf_id VARCHAR2
brf_code VARCHAR2
Now here the number of rows will be 37 * number_of_rows_in_crb_output_view_tab.
Big in which dimension(s)?
I have to join the master table and the new detail table for each report generation.
What drives the design - the updating of REPORTS_MASTER or the application which uses the updated data?
The time for the update and the daily generation of the 37 reports (we can ignore the performance of the Discoverer reports).
Thanks,
Jeneesh -
hi,
Can anyone tell me where I can find the last time and date at which a table was accessed, i.e., data being selected from or modified in a table?
The last change date of a record in a standard table will usually be in a field ERDAT, if one exists.
-
Z- T code usage log table.
hi All,
I am looking for the standard table that logs T-code or program usage, to fulfill my requirement.
I would like to get values such as username, run date, run time, etc.
-Thanks
Amit
Hello Amit,
Although it was not specifically meant for this purpose, transaction STAD is an alternative way to check the usage of a Z-transaction.
Usage of transaction STAD is:
enter the starting date / time
enter the length which indicates how long the server should be analysed from the starting date / time
in field "Transaction" enter the transaction name you need to check
simply press <enter>
Transaction STAD contains a huge amount of information about program statistics, so the information won't be kept for long. However, the amount of storable program statistics should be raised only with care, as it can put a huge additional load on the server.
Best regards,
Laszlo -
Livecache data cache usage - table monitor_caches
Hi Team,
We have a requirement of capturing the Data cache usage of Livecache on an hourly basis.
Instead of doing it manually by going into LC10 and copying the data into Excel, is there a table which captures this data on a periodic basis which we can use to get the report in a single shot?
"monitor_caches" is one table which holds this data, but we are not sure how we can get the data from it. Also, we need to see the contents of this table, and we are not sure how to do that.
As "monitor_caches" is a MaxDB table I am not sure how I can get the data from it. I have never worked on MaxDB before.
Has anyone had this requirement?
Warm Regards,
Venu
Hi,
For Cache usage below tables can be referred
Data Cache Usage - total (table MONITOR_CACHES)
Data Cache Usage - OMS Data (table MONITOR_CACHES)
Data Cache Usage - SQL Data (table MONITOR_CACHES)
Data Cache Usage - History/Undo (table MONITOR_CACHES)
Data Cache Usage - OMS History (table MONITOR_CACHES)
Data Cache Usage - OMS Rollback (table MONITOR_CACHES)
Out Of Memory Exceptions (table SYSDBA.MONITOR_OMS)
OMS Terminations (table SYSDBA.MONITOR_OMS)
Heap Usage (table OMS_HEAP_STATISTICS)
Heap Usage in KB (table OMS_HEAP_STATISTICS)
Maximum Heap Usage in KB (table ALLOCATORSTATISTICS)
System Heap in KB (table ALLOCATORSTATISTICS)
Parameter OMS_HEAP_LIMIT (KB) (dbmrfc command param_getvalue OMS_HEAP_LIMIT)
For reporting purpose , look into the following BW extractors and develop BW report.
/SAPAPO/BWEXDSRC APO -> BW: Data Source - Extractor
/SAPAPO/BWEXTRAC APO -> BW: Extractors for Transactional Data
/SAPAPO/BWEXTRFM APO -> BW: Formula to Calculate a Key Figure
/SAPAPO/BWEXTRIN APO -> BW: Dependent Extractors
/SAPAPO/BWEXTRMP APO -> BW: Mapping Extractor Structure Field
Hope this helps.
Regards,
Deepak Kori -
Hi all,
While entering Strategy and Usage in T-code CS02 for item-level materials, table STPO gets updated with the values we enter on the CS02 screen.
I need to know which table holds the calculated value of Strategy and Usage.
EX: Strategy - 1 (Manual Entry)
Usage - 2
Which table stores the required quantity? For example, if the demand is 100 and the usage is 2, the required quantity is 2% of 100 = 2.
Before you hit save in your transaction, start an SQL trace (transaction ST05) in a different session. Now hit save and wait until all update tasks are finished. Don't do anything else until you deactivate the trace again. When you then look at the SQL trace, it will list all the tables that were updated on the DB during SAVE. You should find the table you are looking for in that list.
It is also possible that it doesn't store the required qty. at all but calculates it at runtime, then this is what you would have to do as well.
Hope that helps,
Michael -
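For completeness, the required-quantity rule quoted in the question is a plain percentage of the demand:

```python
def required_qty(demand, usage_percent):
    # BOM item usage expressed as a percentage of the demand quantity.
    return demand * usage_percent / 100

print(required_qty(100, 2))  # 2.0, matching the 2% of 100 = 2 in the question
```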
Hi could someone help me ...
for the following query...
SELECT returnflag
, linestatus
, sum(quantity) as sum_qty
, sum(extendedprice) as sum_base_price
, sum(extendedprice * (1 - discount)) as sum_disc_price
, sum(extendedprice * (1 - discount) * (1 + tax)) as sum_charge
, avg(quantity) as avg_qty, avg(extendedprice) as avg_price
, avg(discount) as avg_disc, count(*) as count_order
FROM lineitem
WHERE shipdate <= date '1998-12-01'
GROUP BY returnflag, linestatus
ORDER BY returnflag, linestatus;
Explain Plan :
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 135 | 299K (2)| 01:00:00 |
| 1 | SORT GROUP BY | | 5 | 135 | 299K (2)| 01:00:00 |
|* 2 | TABLE ACCESS FULL| LINEITEM | 57M| 1493M| 297K (1)| 00:59:35 |
Predicate Information (identified by operation id):
2 - filter("L_SHIPDATE"<=TO_DATE(' 1998-09-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Would any parts of this query be carried out in the temporary tablespace (sums and averages? sort group by?), or what impact would the temp tablespace have when running this query?
Any help would be appreciated.
I'm using 11g R2.
What about the usage of temporary tablespace (approximately) with the following?
============================================
create view revenue (supplier_no, total_revenue) as
select
suppkey, sum(extendedprice * (1 - discount))
from
lineitem
where
shipdate >= date '1994-12-01'
group by
suppkey;
select suppkey, name, address, phone, total_revenue
from supplier, revenue
where suppkey = supplier_no and total_revenue = (select max(total_revenue)from revenue)
order by suppkey;
==========================================================
View created.
Execution Plan
Plan hash value: 2790460729
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 50000 | 5126K| | 381K (1)| 01:16:23 |
| 1 | MERGE JOIN | | 50000 | 5126K| | 381K (1)| 01:16:23 |
| 2 | SORT JOIN | | 100K| 3027K| | 380K (1)| 01:16:09 |
|* 3 | VIEW | REVENUE | 100K| 3027K| | 380K (1)| 01:16:09 |
| 4 | WINDOW BUFFER | | 100K| 2148K| | 380K (1)| 01:16:09 |
| 5 | HASH GROUP BY | | 100K| 2148K| 1207M| 380K (1)| 01:16:09 |
|* 6 | TABLE ACCESS FULL| LINEITEM | 34M| 733M| | 297K (1)| 00:59:35 |
|* 7 | SORT JOIN | | 50000 | 3613K| 8712K| 1182 (1)| 00:00:15 |
| 8 | TABLE ACCESS FULL | SUPPLIER | 50000 | 3613K| | 308 (1)| 00:00:04 |
Predicate Information (identified by operation id):
3 - filter("TOTAL_REVENUE"="ITEM_1")
6 - filter("L_SHIPDATE">=TO_DATE(' 1994-12-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
7 - access("S_SUPPKEY"="SUPPLIER_NO")
filter("S_SUPPKEY"="SUPPLIER_NO")
Statistics
=========================================================
Presumably the temp tablespace will be used for the sort and merge joins? But how does the temp tablespace work with the view? -
Hi Experts,
I have installed the EBS TCA connector on my OIM 11g installation. I am able to create a user in EBS through OIM, but some of the values are not populated into the HZ_PARTIES table.
1. The email address is pushed into the FND_USERS table but not the HZ_PARTIES table (during the CREATE process).
2. Does the connector create a record in the HZ_RELATIONSHIPS table? If so, how do I configure it?
3. When I tried to update the user's first name or last name, I got the following error:
11d1def534ea1be0:3e12685:12df1da5de3:-7ffd-00000000000011f6] java.sql.SQLException: The number of parameter names does not match the number of registered praremeters[[
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:74)
I appreciate your help.
thanks in advance
Robb
I think you are missing entries in Lookup.EBS.UM.UserTCAProvisioning.
These entries should be there
UD_EBST_USR_DESCR x_description,8,varchar2,IN
UD_EBST_USR_EFFDATEFROM x_start_date,5,date,IN
UD_EBST_USR_EFFDATETO x_end_date ,6,date,IN
UD_EBST_USR_EMAIL x_email_address ,14,varchar2,IN
UD_EBST_USR_FAX x_fax,15,varchar2,IN
UD_EBST_USR_PASSWORD x_unencrypted_password,3,varchar2,IN
UD_EBST_USR_USRNAME x_user_name ,1,varchar2,IN
Thanks,
Nayan