Accessing partitioned table through remote database link
Dear All
I want to use the CREATE TABLE command in my database, and I want to access a partitioned table of a remote database through a database link. Please let me know the scripts.
Thanks
Parth
NB: I was using the following, but the data returned is actually wrong.
select * from data@prod partition (part_feb03);
You can't use the partition clause on a table accessed through a database link, sorry.
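One workaround that is often suggested (all names here are illustrative, adapted to the poster's example): wrap the partition in a view on the remote database, then query the view through the link, so the partition reference never crosses the link.

```sql
-- On the remote (prod) database: wrap the partition in a view.
CREATE OR REPLACE VIEW data_part_feb03 AS
  SELECT * FROM data PARTITION (part_feb03);

-- On the local database: query the view through the link.
SELECT * FROM data_part_feb03@prod;
```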
Regards
Hector
Similar Messages
-
Accessing large partitioned tables over a database link - any gotchas?
Hi,
We are in the middle of a corporate acquisition and I have a question about using database links to efficiently access large tables. There are two geographically distinct database instances, both on Oracle 10.2.0.5 sitting on Linux boxes.
The primary instance (PSHR) contains a PeopleSoft HR and Payroll system and sits in our data centre.
The secondary instance (HGPAY) runs a home grown payroll application and sits in a different data centre to PSHR.
The requirement is to allow PeopleSoft (PSHR) to display targeted (one employee at a time) payroll data from the secondary instance.
For example in HGPAY
CREATE TABLE MY_PAY_DATA AS
SELECT TO_CHAR(A.RN, '00000000') "EMP" -- This is an 8 digit leading 0 unique identifier
, '20110' || to_char(B.RN) "PAY_PRD" -- This is a format of fiscal year plus fortnight in year (01-27)
, C.SOME_KEY -- This is the pay element being considered - effectively random
, 'XXXXXXXXXXXXXXXXX' "FILLER1"
, 'XXXXXXXXXXXXXXXXX' "FILLER2"
, 'XXXXXXXXXXXXXXXXX' "FILLER3"
FROM ( SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 300) A
, (SELECT ROWNUM "RN" FROM DUAL CONNECT BY LEVEL <= 3) B
, (SELECT TRUNC(ABS(DBMS_RANDOM.RANDOM())) "SOME_KEY" FROM DUAL CONNECT BY LEVEL <= 300) C
ORDER BY PAY_PRD, EMP
HGPAY.MY_PAY_DATA is range partitioned on EMP (approx 300 employees per partition) and list sub-partitioned on PAY_PRD (3 pay periods per sub-partition). I have limited the create statement above to represent one sub-partition of data.
On average each employee generates 300 rows in this table each pay period. The table has approx 180 million rows and growing every fortnight.
In PSHR
CREATE VIEW PS_HG_PAY_DATA (EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3)
AS SELECT EMP, PAY_PRD, SOME_KEY, FILLER1, FILLER2, FILLER3 FROM MY_PAY_DATA@HGPAY
PeopleSoft would then generate SQL along the lines of
SELECT * FROM PS_HG_PAY_DATA WHERE EMP = '00002561' AND PAY_PRD = '201025'
The link between the data centres where PSHR and HGPAY sit is not the best in the world, but I am expecting tens of access requests per day rather than thousands, so I believe the link should have sufficient bandwidth to meet the requirements.
I have tried a quick test on two production-sized test instances and it works: it presents the data, and when I look at the explain plan I can see that the remote database is only sending the relevant sub-partition over to PSHR rather than the whole table. Before I pat myself on the back with a "job well done": is there a gotcha that I am missing in using a dblink to access big partitioned tables?
Yes, that's about right. A lot of this depends on exactly what happens in various "oops" scenarios: are you, for example, just burning some extra CPU until someone comes to the DBA and says "my query is slow", or does saturating the network have some knock-on effect on critical apps, or do random long-running queries prevent some partition maintenance operations?
In my mind, the simplest possible solution (assuming you are using a fixed username in the database link) would be to create a profile on HGPAY for the user that is defined for the database link, with a LOGICAL_READS_PER_CALL value large enough to handle any "reasonable" request and low enough to quickly kill any session that tried to do something "stupid". Obviously, you'd need to define "stupid" in your environment, particularly where the scope of a "simple reconciliation report" is undefined. If there are no political issues and you can adjust the profile values over time as you encounter new reports that slowly increase what is deemed "reasonable", this is likely the simplest approach. If, on the other hand, you've got to put in a change request to change the setting that has to be reviewed by the change control board at its next quarterly meeting with the outsourced DBA vendor, you could turn a 30 minute report into 30 hours of work spread over 30 days. In the ideal world, though, that's where I'd start.
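To make that concrete, here is a rough sketch of the profile setup; the profile name, user name, and limit value are placeholders you would size for your own environment.

```sql
-- Hypothetical profile for the fixed database-link user on HGPAY.
-- 1000000 logical reads is a placeholder; size it just above your
-- largest "reasonable" query. Enforcement also requires the
-- RESOURCE_LIMIT initialization parameter to be TRUE.
CREATE PROFILE hgpay_link_profile LIMIT
  LOGICAL_READS_PER_CALL 1000000;

ALTER USER hgpay_link_user PROFILE hgpay_link_profile;
```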
Getting more complex, you can use Resource Manager to kill queries that run too long on the wall clock. Since the network is almost certainly going to be the bottleneck, it's unlikely that CPU throttling will do much good: you can probably saturate the network with a very small amount of CPU. Network throttling, in my mind, is an extra step up in complexity again, depending on the specifics of your particular situation and what you're competing with.
Justin -
Accessing multiple tables through one SELECT statement
How do I access multiple tables through one single SELECT statement, also using a WHERE condition on multiple fields from different tables? Please give me an example using any tables. Thanks in advance.
See the below example code :
REPORT ZMM_COST no standard page heading
line-size 255
message-id zwave .
* Type-pools
type-pools : slis.
* Tables
tables : mara,
makt,
mbew,
konp,
pgmi,
marc,
RMCP3,
sscrfields,
mvke.
* Internal table for MARC and MARA
data : begin of i_join occurs 0,
matnr like mara-matnr, " Material #
meins like mara-meins, " Unit of Measure
werks like marc-werks, " Plant
zzdept like marc-zzdept," Department
end of i_join.
* Internal table for PGMI
data : begin of i_pgmi occurs 0,
werks like pgmi-werks, " Plant,
nrmit like pgmi-nrmit, " Material #
wemit like pgmi-wemit, " Plant
end of i_pgmi.
* Internal table for MBEW
data i_mbew like mbew occurs 0 with header line.
* Internal table for output
data : begin of i_output occurs 0 ,
matnr like mara-matnr, " Material #
maktx like makt-maktx, " Material Desc
VPRSV like mbew-VPRSV, " Price Control Indicator
VERPR like mbew-VERPR, " Moving Avg Price
meins like mara-meins, " Base Unit of Measure
STPRS like mbew-STPRS, " Standard Price
LPLPR like mbew-LPLPR, " Current Planned Price
ZPLPR like mbew-ZPLPR, " Future Planned Price
VPLPR like mbew-VPLPR, " Previous Planned Price
kbetr like konp-kbetr, " Sales Price
KMEIN like konp-KMEIN, " Sales Unit
margin(5) type p decimals 2,
vmsta like mvke-vmsta, " Material Status.
end of i_output.
* Internal table for A004
data : i_a004 like a004 occurs 0 with header line.
* Variables
data : wa_lines type i,
wa_maktx type makt-maktx,
v_flag type c.
* ALV function module variables
DATA: g_repid like sy-repid,
gs_layout type slis_layout_alv,
g_exit_caused_by_caller,
gs_exit_caused_by_user type slis_exit_by_user.
DATA: gt_fieldcat type slis_t_fieldcat_alv,
gs_print type slis_print_alv,
gt_events type slis_t_event,
gt_list_top_of_page type slis_t_listheader,
g_status_set type slis_formname value 'PF_STATUS_SET',
g_user_command type slis_formname value 'USER_COMMAND',
g_top_of_page type slis_formname value 'TOP_OF_PAGE',
g_top_of_list type slis_formname value 'TOP_OF_LIST',
g_end_of_list type slis_formname value 'END_OF_LIST',
g_variant LIKE disvariant,
g_save(1) TYPE c,
g_tabname_header TYPE slis_tabname,
g_tabname_item TYPE slis_tabname,
g_exit(1) TYPE c,
gx_variant LIKE disvariant.
data : gr_layout_bck type slis_layout_alv.
* Selection screen
selection-screen : begin of block blk with frame title text-001.
parameters : p_werks like marc-werks default '1000' obligatory.
select-options : s_dept for marc-zzdept obligatory,
s_matnr for mara-matnr,
s_mtart for mara-mtart,
s_vprsv for mbew-VPRSV,
s_PRGRP for RMCP3-PRGRP MATCHCODE OBJECT MAT2 ,
s_vmsta for mvke-vmsta.
selection-screen: end of block blk.
*SELECTION-SCREEN BEGIN OF BLOCK b3 WITH FRAME TITLE text-003.
*PARAMETERS: p_vari LIKE disvariant-variant.
*SELECTION-SCREEN END OF BLOCK b3.
* At selection-screen events
*-- Process on value request
*AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_vari.
*  PERFORM f4_for_variant.
* Initialization
Initialization.
g_repid = sy-repid.
sscrfields-functxt_01 = 'Clear Selection'.
selection-screen function key 1.
AT SELECTION-SCREEN.
case sscrfields-ucomm.
when 'Clear Selection' or 'FC01'.
clear: s_matnr,
p_werks.
refresh: s_matnr,
s_dept,
s_mtart,
s_vprsv,
s_PRGRP,
s_vmsta.
endcase.
* Start-of-selection
start-of-selection.
* Clear all data
perform clear_data.
* Get the data from the PGMI table
perform get_pgmi.
* Get the data from the MARC and MARA tables
perform get_mara_marc.
* Get the data from the MBEW table
perform get_mbew.
* Move the data into the output table
perform move_output_internal.
*end-of-selection.
end-of-selection.
if not i_output[] is initial.
* ALV function module
perform print_alv.
endif.
*& Form get_pgmi
* Select the data from the PGMI table
FORM get_pgmi.
clear v_flag.
* If Product Group has a value on the selection screen
if not s_prgrp is initial.
select werks nrmit wemit from pgmi into table i_pgmi
where prgrp in s_prgrp
and werks = p_werks
and wemit = p_werks.
v_flag = 'X'.
endif.
ENDFORM. " get_pgmi
*& Form get_mara_marc
* Select the data from MARA and MARC
FORM get_mara_marc.
if v_flag = 'X'.
select a~matnr a~meins b~werks b~zzdept into table i_join
from mara as a inner join marc as b on a~matnr = b~matnr
for all entries in i_pgmi
where a~matnr in s_matnr
and b~werks = p_werks
and b~zzdept in s_dept
and a~mtart in s_mtart
and a~matnr = i_pgmi-nrmit
and b~werks = i_pgmi-werks.
else.
* Get the data from the MARA and MARC tables
select a~matnr a~meins b~werks b~zzdept into table i_join
from mara as a inner join marc as b on a~matnr = b~matnr
where a~matnr in s_matnr
and b~werks = p_werks
and b~zzdept in s_dept
and a~mtart in s_mtart.
endif.
clear wa_lines.
describe table i_join lines wa_lines.
if wa_lines is initial.
message i000(zwave) with 'List contains no data'.
stop.
endif.
sort i_join by matnr werks zzdept.
ENDFORM. " get_mara_marc
*& Form get_mbew
* Select the data from the MBEW table
FORM get_mbew.
* Get the data from MBEW
select * from mbew into table i_mbew
for all entries in i_join
where matnr = i_join-matnr.
clear wa_lines.
describe table i_mbew lines wa_lines.
if wa_lines is initial.
message i000(zwave) with 'List contains no data'.
stop.
endif.
sort i_mbew by matnr bwkey.
ENDFORM. " get_mbew
*& Form move_output_internal
* Final results
FORM move_output_internal.
loop at i_join.
clear wa_maktx.
* Compare the data with the MVKE table
select single vmsta from mvke into mvke-vmsta
where matnr = i_join-matnr
and vkorg = '0001'
and vtweg = '01'
and vmsta in s_vmsta.
if sy-subrc ne 0.
continue.
else.
i_output-vmsta = mvke-vmsta.
endif.
read table i_mbew with key matnr = i_join-matnr
bwkey = i_join-werks
binary search.
if sy-subrc eq 0.
*   Price Control Indicator
i_output-VPRSV = i_mbew-VPRSV.
*   Moving Average Price
i_output-VERPR = i_mbew-VERPR / i_mbew-peinh.
*   Standard Price
i_output-STPRS = i_mbew-STPRS / i_mbew-peinh.
*   Current Planned Price
i_output-LPLPR = i_mbew-LPLPR / i_mbew-peinh.
*   Future Planned Price
i_output-ZPLPR = i_mbew-ZPLPR / i_mbew-peinh.
*   Previous Planned Price
i_output-VPLPR = i_mbew-VPLPR / i_mbew-peinh.
*   Base Unit of Measure - Added by Seshu 01/09/2007
i_output-meins = i_join-meins.
else.
continue.
endif.
* Get the sales price
perform get_sales_data.
if i_mbew-VPRSV = 'V'.
* Get the percentage of margin
if i_output-kbetr ne '0.00'.
i_output-margin = ( ( i_output-kbetr - i_mbew-VERPR )
/ i_output-kbetr ) * 100 .
endif.
else.
* Get the percentage of margin
if i_output-kbetr ne '0.00'.
i_output-margin = ( ( i_output-kbetr - i_output-stprs )
/ i_output-kbetr ) * 100 .
endif.
endif.
* Get the material description from the MAKT table
select single maktx from makt into wa_maktx
where matnr = i_join-matnr
and spras = 'E'.
if sy-subrc eq 0.
i_output-matnr = i_join-matnr.
i_output-maktx = wa_maktx.
endif.
append i_output.
clear : i_output,
i_join,
i_mbew.
endloop.
ENDFORM. " move_output_internal
*& Form get_sales_data
* Get the sales price for each material
FORM get_sales_data.
* Get the data from the A004 table to get KNUMH
* Added new field Sales Unit - Seshu 01/09/2006
refresh : i_a004.
clear : i_a004.
data : lv_kbetr like konp-kbetr," Condition value
lv_KPEIN like konp-kpein , "per
lv_KMEIN like konp-KMEIN. " Sales Unit
select * from a004 into table i_a004
where matnr = i_join-matnr
and vkorg = '0001'
and vtweg = '01'.
if sy-subrc eq 0.
sort i_a004 by DATAB descending.
* Get the latest date
read table i_a004 with key matnr = i_join-matnr
vkorg = '0001'
vtweg = '01'
binary search.
* Get the sales value
select single kbetr KPEIN KMEIN from konp
into (lv_kbetr,lv_KPEIN, lv_KMEIN)
where knumh = i_a004-knumh
and kappl = i_a004-kappl
and kschl = i_a004-kschl.
if sy-subrc eq 0.
i_output-kbetr = lv_kbetr / lv_KPEIN.
i_output-KMEIN = lv_KMEIN.
endif.
endif.
clear : lv_kbetr,
lv_kpein,
lv_KMEIN.
ENDFORM. " get_sales_data
*& Form print_alv
* ALV function module
FORM print_alv.
* Fill the field catalog
PERFORM fieldcat_init using gt_fieldcat[].
gr_layout_bck-edit_mode = 'D'.
gr_layout_bck-colwidth_optimize = 'X'.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
I_INTERFACE_CHECK = ' '
*   I_BYPASSING_BUFFER =
I_BUFFER_ACTIVE = ' '
I_CALLBACK_PROGRAM = g_repid
I_CALLBACK_PF_STATUS_SET = ' '
I_CALLBACK_USER_COMMAND = g_user_command
I_CALLBACK_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_END_OF_LIST = ' '
*   I_STRUCTURE_NAME =
I_BACKGROUND_ID = ' '
*   I_GRID_TITLE =
*   I_GRID_SETTINGS =
IS_LAYOUT = gr_layout_bck
IT_FIELDCAT = gt_fieldcat[]
*   IT_EXCLUDING =
*   IT_SPECIAL_GROUPS =
*   IT_SORT =
*   IT_FILTER =
*   IS_SEL_HIDE =
I_DEFAULT = 'X'
I_SAVE = g_save
*   IS_VARIANT =
*   IT_EVENTS =
*   IT_EVENT_EXIT =
*   IS_PRINT =
*   IS_REPREP_ID =
I_SCREEN_START_COLUMN = 0
I_SCREEN_START_LINE = 0
I_SCREEN_END_COLUMN = 0
I_SCREEN_END_LINE = 0
*   IT_ALV_GRAPHICS =
*   IT_ADD_FIELDCAT =
*   IT_HYPERLINK =
*   I_HTML_HEIGHT_TOP =
*   I_HTML_HEIGHT_END =
*   IT_EXCEPT_QINFO =
* IMPORTING
*   E_EXIT_CAUSED_BY_CALLER =
*   ES_EXIT_CAUSED_BY_USER =
TABLES
T_OUTTAB = i_output
EXCEPTIONS
PROGRAM_ERROR = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDFORM. " print_alv
*& Form fieldcat_init
* Field catalog
FORM fieldcat_init USING e01_lt_fieldcat type slis_t_fieldcat_alv.
DATA: LS_FIELDCAT TYPE SLIS_FIELDCAT_ALV.
* Material #
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'MATNR'.
LS_FIELDCAT-ref_fieldname = 'MATNR'.
LS_FIELDCAT-ref_tabname = 'MARA'.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Material'.
ls_fieldcat-seltext_M = 'Material'.
ls_fieldcat-seltext_S = 'Material'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Material Description
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'MAKTX'.
LS_FIELDCAT-OUTPUTLEN = 35.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Description'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Price Indicator
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'VPRSV'.
LS_FIELDCAT-OUTPUTLEN = 7.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Price Control Indicator'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Moving Avg Price
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'VERPR'.
LS_FIELDCAT-OUTPUTLEN = 11.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Moving Avg Price'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Base Unit of Measure
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'MEINS'.
LS_FIELDCAT-OUTPUTLEN = 7.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Base Unit'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Standard Price
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'STPRS'.
LS_FIELDCAT-OUTPUTLEN = 11.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Standard Price'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Current Planned Price
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'LPLPR'.
LS_FIELDCAT-OUTPUTLEN = 11.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Current Planned Price'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Future Planned Price
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'ZPLPR'.
LS_FIELDCAT-OUTPUTLEN = 11.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Future Planned Price'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Previous Planned Price
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'VPLPR'.
LS_FIELDCAT-OUTPUTLEN = 11.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Previous Planned Price'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Sales Price
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'KBETR'.
LS_FIELDCAT-OUTPUTLEN = 13.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Sales Price'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Sales Unit
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'KMEIN'.
LS_FIELDCAT-OUTPUTLEN = 7.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Sales Unit'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* % of Gross Margin
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'MARGIN'.
LS_FIELDCAT-OUTPUTLEN = 13.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = '% of Gross Margin'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
* Material Status
CLEAR LS_FIELDCAT.
LS_FIELDCAT-FIELDNAME = 'VMSTA'.
LS_FIELDCAT-OUTPUTLEN = 13.
LS_FIELDCAT-TABNAME = 'I_OUTPUT'.
ls_fieldcat-seltext_L = 'Material Status'.
APPEND LS_FIELDCAT TO E01_LT_FIELDCAT.
ENDFORM. " fieldcat_init
**& Form f4_for_variant
*       text
*FORM f4_for_variant.
*  CALL FUNCTION 'REUSE_ALV_VARIANT_F4'
*    EXPORTING
*      is_variant          = g_variant
*      i_save              = g_save
*      i_tabname_header    = g_tabname_header
*      i_tabname_item      = g_tabname_item
*      it_default_fieldcat =
*    IMPORTING
*      e_exit              = g_exit
*      es_variant          = gx_variant
*    EXCEPTIONS
*      not_found           = 2.
*  IF sy-subrc = 2.
*    MESSAGE ID sy-msgid TYPE 'S' NUMBER sy-msgno
*      WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
*  ELSE.
*    IF g_exit = space.
*      p_vari = gx_variant-variant.
*    ENDIF.
*  ENDIF.
*ENDFORM. " f4_for_variant
*& Form clear_data
* Clear the internal tables
FORM clear_data.
clear : i_output,
i_join,
i_mbew,
i_a004,
i_pgmi.
refresh : i_output,
i_join,
i_mbew,
i_a004,
i_pgmi.
ENDFORM. " clear_data
* FORM user_command
FORM user_command USING r_ucomm LIKE sy-ucomm
rs_selfield TYPE slis_selfield. "#EC CALLED
CASE R_UCOMM.
WHEN '&IC1'.
read table i_output index rs_selfield-tabindex.
SET PARAMETER ID 'MAT' FIELD i_output-matnr.
SET PARAMETER ID 'WRK' FIELD p_werks.
if not i_output-matnr is initial.
call transaction 'MD04' and skip first screen.
endif.
ENDCASE.
ENDFORM.
Thanks
Seshu -
How to test issue with accessing tables over a DB link?
Hey all,
Using 3.1.2 on XE, I have a little app. The database schema for this app only contains views to the actual tables, which happen to reside over a database link in a 10.1.0.5.0 DB.
I ran across an issue where a filter I made on a form refused to work (see [this ApEx thread|http://forums.oracle.com/forums/message.jspa?messageID=3178959]). I verified that the issue only happens when the view points to a table across a DB link, by recreating the table in the local DB and pointing the view to it. When I do this, the filter works fine. When I change the view back to use the remote table, it fails. And it fails only in the filter: every other report and every other tool accessing the remote table via the view works fine.
Anyone know how I can troubleshoot this? For kicks, I also tried using a 10.2.0.3.0 DB for the remote link, but with the same results.
TIA,
Rich
Edited by: socpres on Mar 2, 2009 3:44 PM
Accidental save...
ittichai wrote:
Rich,
I searched MetaLink for your issue. This may be a bug in 3.1 which will be fixed in 4.0. Please see Doc ID 740581.1, "Database Link Tables Do Not Show Up In Table Drop Down List In Apex". There is a workaround mentioned in the document.
I'm not sure why I never thought of searching MetaLink, but thanks for the pointer! It doesn't match my circumstances, however. The bug smells like a view not being queried in the APEX development tool itself; i.e. the IDE's own code needs changing, not necessarily the apps created with the IDE.
I'm working on getting you access to my hosted app...
Thanks,
Rich -
Remote Delta link setup problem with Bean / Access Service
Hello,
I am trying to setup Remote Delta Link (RDL) between two portals. (Both portals same version - EP 7.0 EHP1 SP 05 - are in the same domain)
I already have the Remote Role Assignment working without any issues.
The following have been done successfully:
1. Same user repository has been setup for both the portals
2. Setup trust between producer and consumer (SSO working fine)
3. Producer added and registered succesfully on consumer
4. Permissions setup on producer and consumer
5. pcd_service user with required UME actions setup
I am able to see all the remote content in the Consumer portal.
When I try to copy the remote content and paste it as local content, I am getting the following error:
Could not create remote delta link to object 'page id'. Could not connect to the remote portal. The remote portal may be down, there may be a network problem, or your connection settings to the remote portal may be configured incorrectly.
After increasing the log severity, I am able to see the following in Default Trace:
com.sap.portal.fpn.transport.Trying to lookup access service (P4-RMI) for connecting to producer 'ess_int' with information: com.sap.portal.fpn.remote.AccessServiceInformation@31c92207[connectionURL=hostname.mycompany.com:50004, shouldUseSSL=false, RemoteName=AccessService]
com.sap.portal.fpn.transport.Unable to lookup access service (P4-RMI) with information: com.sap.portal.fpn.remote.AccessServiceInformation@31c92207[connectionURL=hostname.mycompany.com:50004, shouldUseSSL=false, RemoteName=AccessService]
AbstractAdvancedOperation.handleDirOperationException
[EXCEPTION]
com.sap.portal.pcm.admin.exceptions.DirOperationFailedException: Could not retrieve the bean / access service to connect with producer
Could not retrieve the bean / access service to connect with producer
As you can see above, there is some bean / access service which is not retrieved successfully. I am not sure if this is a permission problem on the consumer.
I have checked that the P4 ports are configured correctly (standard - not changed) and I am able to telnet from producer to consumer (and vice versa) on the P4 port.
I am stuck at this point and am not able to find any information on this.
I would really appreciate if some one can point me in the right direction.
Thank you for reading.
- Raj
Hi Raj,
Please check your config of the P4 port on the producer. Is it really 50004 (check SystemInfo of the producer)?
I do think there's a problem with the P4 communication since RDL requires P4 connection.
Do you have load balanced consumer-producer connection? Please refer to this blog for further details
Little known ways to create a load balanced Consumer Producer connection in a FPN scenario
Regards,
Dao -
Linking Java to Access Database tables
Hello,
I need to use JCreator to link Java to Access database tables.
Could anyone tell me what kind of driver I need to use?
Also, where could I find examples of linking Java to Access database tables?
Thank you,
Daniel
Thanks.
I have read the tutorial and downloaded the sample code from the web http://java.sun.com/docs/books/tutorial/jdbc/
In the CreateCoffee.java programming, I made the following changes:
// String url = "jdbc:mySubprotocol:myDataSource";
String url = "jdbc:odbc:DB1";
DB1 is an Access Database file located in the same folder as CreateCoffee.java.
//Class.forName("myDriver.ClassName");
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
Every time I run the CreateCoffee.java program, it shows the following runtime error:
SQLException:[Microsoft][ODBC Driver Manager]
Data source name not found and no default driver specified.
Could any one have any suggestions for solving the above problem?
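One thing I still want to try is skipping the DSN altogether: as I understand it, the old JDBC-ODBC bridge also accepted a DSN-less connection string that names the Access driver directly, so the "Data source name not found" error should not apply. A small sketch (the driver string and file path are illustrative, they must match what is installed on the machine, and note the bridge itself was removed in Java 8):

```java
// Builds a DSN-less JDBC-ODBC URL for an Access .mdb file, so no
// entry in the Windows ODBC Administrator is required.
public class AccessUrl {
    static String dsnLessUrl(String mdbPath) {
        // The driver name must match the installed ODBC driver exactly.
        return "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=" + mdbPath;
    }

    public static void main(String[] args) {
        // Pass this URL to DriverManager.getConnection(...) instead of "jdbc:odbc:DB1".
        System.out.println(dsnLessUrl("DB1.mdb"));
    }
}
```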
Thank you,
Daniel -
Can't use filter when accessing HANA table through link from another table in SAPUI5
Hi all,
Have a strange one and I was wondering if someone had come across this before.
Hana table structure
entity SalesOrder {
  key element name : String;
  element contact : Association[0..*] to Contact via backlink order;
};

entity Contact {
  element location : Location;
  element order : Association to SalesOrder;
};
each sales order has a number of contacts.
In javascript, if I bind on list of contacts directly then I can use the filter
oRowRepeater.bindRows("/Contact",oRowTemplate,null,[new sap.ui.model.Filter("location",sap.ui.model.FilterOperator.EQ, "Germany")]);
and if I enter the link directly it returns the number of entries:
Contact/$count?$filter=location eq 'Germany'
But if I get the contacts through the sales order, it doesn't allow me to use the filter:
oRowRepeater.bindRows("/SalesOrder('mike')/Contact",oRowTemplate,null,[new sap.ui.model.Filter("location",sap.ui.model.FilterOperator.EQ, "Germany")]);
I can return the contacts the same as when accessing them directly, and I can get information such as
new sap.ui.commons.TextView({text: "{location}"}) same as before; the only difference is that I can't filter.
When I try the following I get error
/SalesOrder('mike')/Contact/$count?$filter=location eq 'Germany'
Error
"message": {
"lang": "en-US",
"value": {
"type": "ODataInputError",
"message": "Bad Request URL: U"
When I open Chrome Developer Tools I see the following error
SalesOrder/contact/$count?$filter=location%20eq%20%27EMEA%27 400 (Bad Request)
Basically the $count doesn't work if you access contacts indirectly even though the link is there.
Is this a bug or am I doing something wrong?
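In the meantime I am working around it by filtering the rows on the client after reading them, which is fine for my small contact lists; a minimal sketch (the data and field names here are illustrative):

```javascript
// Client-side fallback: filter contact rows after reading them,
// instead of relying on a $filter the backend rejects on this path.
function filterByLocation(rows, loc) {
  return rows.filter(function (r) { return r.location === loc; });
}

var contacts = [
  { name: "Anna", location: "Germany" },
  { name: "Ben", location: "France" }
];

console.log(filterByLocation(contacts, "Germany").length); // 1
```

This obviously pulls every contact of the order over the wire, so it only makes sense while the result sets stay small.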
Hana is sp8, I am up to date with almost everything.
Many thanks,
Matthew
For the reading/writing from Oracle to Access, you could do something like this:
for table in oracle/access with fields of id(number), name(varchar/text):
Connection oconn = // connect to Oracle
Connection aconn = // connect to Access
// create the insert statement into the Access table
PreparedStatement apstmt = aconn.prepareStatement("INSERT INTO accesstable (id, name) VALUES (?,?)");
// select all the results needed from the Oracle table
Statement ostmt = oconn.createStatement();
ResultSet rs = ostmt.executeQuery("select id, name from oracletable where...");
while ( rs.next() ) {
    apstmt.setLong(1, rs.getLong(1));
    apstmt.setString(2, rs.getString(2));
    apstmt.addBatch();
    // if your driver doesn't support batching, just use this instead:
    // apstmt.executeUpdate();
}
// use this only if you use batch statements
apstmt.executeBatch();
rs.close();
ostmt.close();
apstmt.close();
oconn.close();
aconn.close(); -
TIME_OUT in SAP when accessing external Oracle table through native SQL
Hi,
I have a problem with one of my native SQL statements. It takes a long time accessing the table, considering that the number of records to be retrieved is only small.
Something happened on the Oracle system, but that is still to be confirmed; before, there wasn't any issue. Looking further, I found that the problem occurs when the value equated in the WHERE clause is a literal (meaning the value is not declared via DATA or CONSTANTS in the ABAP program).
Example
1.
EXEC SQL.
WHERE FIELD = '1'
ENDEXEC.
instead of
2.
CONSTANTS: c_1 value '1'.
EXEC SQL.
WHERE FIELD = c_1
ENDEXEC.
I found that when the code is written as in Example 1, a time-out error occurs, but as in Example 2 there is no issue.
Can someone explain this? Is there something in the Oracle configuration that could affect SAP this way?
Thanks!
I wonder if you could share the outline of your code to access an external Oracle database. I've just been given the assignment to do just that, but don't know where to start.
thx,
Mike DeGuire -
"No partition table found" Can't access to my DATA partition
Hello friends! A catastrophe has happened! Yesterday I was resizing partitions on my hard drive, divided in this way:
1: ext4 containing CHAKRA
2: swap
3: extended
3rd: DATA
And here were all the other partitions containing Arch (/, /var, /home)
I decided to delete all of Arch's partitions and to extend the DATA partition with the space freed by removing Arch. I did these steps from Chakra; the extended partition was unmounted and free to work on. I left my computer on all night without even using it, so as not to interfere, but 10 minutes ago I went to check and discovered that my computer was off (I don't know why, because the battery was attached; maybe the electricity went down for a considerable period, and in any case my battery lasts very little, so it did not survive until the power returned).
I turn my computer on, sign in to Chakra, and find out that the extended partition is gone! Partitionmanager tells me "No partition table found" and all my data has disappeared!
Is there a way to remedy this apocalypse? I truly don't know what to do, apart committing suicide!
Last edited by TheImmortalPhoenix (2012-01-18 05:30:20)
There is no drive! There is only free space, and all I can do is create a new partition...
This is the output of fdisk -l
> sudo fdisk -l
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: ignoring extra data in partition table 5
Warning: invalid flag 0x131b of partition table 5 will be corrected by w(rite)
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006fdf9
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 31459327 15728640 83 Linux
/dev/sda2 35653632 976766975 470556672 f W95 Ext'd (LBA)
/dev/sda3 31459328 35653631 2097152 82 Linux swap / Solaris
/dev/sda5 ? 3648605328 6213431608 1282413140+ fb VMware VMFS
Partition table entries are not in disk order -
ORA-28150 when accessing data from a remote database
Portal Version: Portal 3.0.9.8.0
RDBMS Version: 8.1.6.3.0
OS/Vers. Where Portal is Installed:: Solaris 2.6
Error Number(s):: ORA-28150
I have a problem with using a database link to access a table in
a remote database. So long as the dblink uses explicit logins
then everything works correctly. When the dblink does not have a
username then I get the ORA-28150 message. The database link is
always public. A synonym is created locally that points to a
synonym in the remote database. I am using the same Oracle user
in both databases. The Oracle portal lightweight user has this
same Oracle user as its default schema. The contents of the
remote table are always visible to sqlplus, both when the link
has a username and when it doesn't have a username.
All the databases involved are on the same version of Oracle.
I'm not sure which Oracle login is being used to access the
remote database, if my lightweight user has a database schema
of 'xyz' then does portal use 'xyz' to access the remote
database? I would be very grateful for any help or pointers that
might help to solve this problem.
James
To further clarify this, both my local and remote databases
schemas are owned by the same login.
The remote table has a public synonym.
The link is public but uses default rather than explicit logins.
The local table has a public synonym that points to the remote
synonym via the database link.
If I change the link to have an explicit login then everything
works correctly.
I can view the data in the remote database with TOAD and with
sqlplus even when the database link has default login rather
than explicit login.
This seems to point to Portal as being the culprit. Can anyone
tell me whether default logins can be used across database links
with portal?
TIA
James
832019 wrote:
One way to do this is by creating a database link and joining the two tables directly. But this option is ruled out. So please suggest me some way of doing this.
Thus you MUST use two connection strings.
Thus you are going to be either constructing some intricate SQL dynamically or you are going to be dragging a lot of data over the wire and doing an in memory search.
Although realistically that is what the database link table would have done as well.
Might be better to look at moving the table data from one database to the other. Depends on size of course. -
SQL error in the database when accessing a table.
Hi,
I got the below error at the production server. Please suggest how to resolve this error.

Runtime Errors: DBIF_RSQL_SQL_ERROR
Exception: CX_SY_OPEN_SQL_DB
Date and Time: 02.01.2011 15:55:06

Short text:
SQL error in the database when accessing a table.

How to correct the error:
Database error text........: "[10054] TCP Provider: An existing connection was
forcibly closed by the remote host.
[10054] Communication link failure"
Internal call code.........: "[RSQL/INSR/SWFCNTBUF ]"
Please check the entries in the system log (Transaction SM21).
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"DBIF_RSQL_SQL_ERROR" "CX_SY_OPEN_SQL_DB"
"CL_SWF_CNT_FACTORY_SHMEM======CP" or "CL_SWF_CNT_FACTORY_SHMEM======CM001"
"ADD_INSTANCE"

Information on where terminated:
Termination occurred in the ABAP program "CL_SWF_CNT_FACTORY_SHMEM======CP"
in "ADD_INSTANCE". The main program was "SAPMSSY1".
In the source code you have the termination point in line 16
of the (Include) program "CL_SWF_CNT_FACTORY_SHMEM======CM001".
The termination was caused because exception "CX_SY_OPEN_SQL_DB" occurred in
procedure "ADD_INSTANCE" "(METHOD)", but it was neither handled locally nor
declared in the RAISING clause of its signature.
The procedure is in program "CL_SWF_CNT_FACTORY_SHMEM======CP"; its source
code begins in line 1 of the (Include) program "CL_SWF_CNT_FACTORY_SHMEM======CM001".

Source Code Extract (Line / SourceCde):
    1 METHOD add_instance .
    2
    3   data: ls_id type swfcntbuf.
    4
    5 * check buffer method - store in local buffer if necessary
    6   retcode = cl_swf_cnt_factory=>add_instance( ibf_por = ibf_por instance = instance ).
    7
    8   CHECK m_buffer_method EQ mc_buffer_shared.
    9
   10 * append key to list of tasks to add stored in database table SWFCNTBUF
   11 * will be evaluated by build process for shared memory area (UPDATE_BUFFER method)
   12
   13   ls_id-mandt = sy-mandt.
   14   ls_id-id = ibf_por.
   15
>>>>> 16  INSERT swfcntbuf CONNECTION r/3*wfcontainer
   17     FROM ls_id.
   18
   19   IF sy-subrc EQ 0.
   20 *   Commit seems to be necessary always, even if INSERT has failed, to get rid of
   21 *   database locks
   22     COMMIT CONNECTION r/3*wfcontainer.
   23   ENDIF.
   24
   25 ENDMETHOD.

Duplicate here: SQL error in the database when accessing a table.
Do not post the same question in more than one forum. -
MS Access attach table error after migration
I ran the MS Access Migration Wizard 1.5.4 to convert an Access
97 Database to Oracle 7.3.3 (soon to be 7.3.4).
Everything seemed to go OK: the mdb file shows the local and
remote attached tables, the queries are present etc. Also the
ODBC link is fine within the mdb file, in that I can browse the
data on the Oracle database.
However when I start the MS Access application which is in an
mde file, I get the following error :
modRefresh AttachTables Error: The Microsoft Jet database engine
cannot find the input table or query 'tablename'. Make sure it
exists
and that its name is spelled correctly. (3078)
The application continues to work fine if I put the original mdb
file back.
Is anybody able to help identify what the problem is, or how to
turn on some debugging that might help? Any help appreciated, as
I don't have a lot of experience with Access.
Thanks,
Mike.
Mike,
The Migration Wizard only supports one level of linked
tables.
Regards,
Marie
Robert R. Wagner (guest) wrote:
: Mike:
: I'm pretty new to the business of migrating data from Access 97
: to Oracle, but I'm pretty sure that we need to give up using
: linked tables when we do this. This was a (wonderful) benefit
of
: the Jet Engine.
: I'm now looking at using ADO and OLE-DB with the (eventually to
: be) migrated data. It's a whole new learning curve to climb!
: Cheers >>>>>> Robert
: Mike Connell (guest) wrote:
: : I have now realised the error is because Access doesn't seem
: to
: : support linking to already linked tables.
: : That is the mde file is running via linked tables to the mdb
: : file. If the tables in the mdb are renamed and replaced by
: : queries to new tables linked to an Oracle database, the mde
: : fails to find the tables it expects to use.
: : If anybody knows a workaround for running Access to allow
more
: : than one level of table linking, then please let me know.
: : Thanks,
: : Mike Connell
Oracle Technology Network
http://technet.oracle.com
-
"ghost records" in partitioned tables
Hi,
We observe a very strange behavior of some partitioned table (range method) for a small subset of records. For example:
select b.obj_id0 from event_session_batch_ctlr_t partition(partition_migrate) b, event_session_batch_ctlr_t c
where b.obj_id0=c.obj_id0(+)
and c.obj_id0 is null
This query returns 20 records where it shouldn't return anything! obj_id0 is the partitioning key and the primary key of the table.
If you query these rows directly from the partitioned table, even with a full scan, you don't get anything back. You only get the data if you query that particular partition (partition_migrate) directly.
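The anti-join shape in the query above (rows reachable through the partition that a table-level query never returns) can be mimicked outside Oracle. A toy sqlite3 reproduction with invented data, since SQLite has no partitions, two tables stand in for the two access paths:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Stand-ins for the two access paths: reading via the named partition
# vs. reading via the whole table.
con.execute("CREATE TABLE via_partition (obj_id0 INTEGER)")
con.execute("CREATE TABLE via_table (obj_id0 INTEGER)")
con.executemany("INSERT INTO via_partition VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO via_table VALUES (?)", [(1,), (3,)])

# Same anti-join as the question: keys visible through the partition
# with no matching row at table level ("ghost records").
ghosts = con.execute("""
    SELECT b.obj_id0
    FROM via_partition b
    LEFT JOIN via_table c ON b.obj_id0 = c.obj_id0
    WHERE c.obj_id0 IS NULL
""").fetchall()
print(ghosts)  # -> [(2,)]
```

In a healthy table this query returns nothing, because the partition is a subset of the table; any rows it does return point at an inconsistency between the partition-level and table-level access paths, which in Oracle usually means corruption worth raising with Support.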
We found that these records can sometimes be returned from a query with a join on this partitioned table, without specifying any partition name, depending on the execution plan.
Do you have any explanation for this strange behavior, and suggestions for how to solve this problem?
Thank you in advance,
Raphaël
Hi,
Retrieve those records from backup. So, better you contact the Basis people.
RTVDLTRCD FROMFILE(xxx) TOFILELIB(yyy) - this command is used to retrieve deleted records.
This would extract the deleted records in the FROMFILE and write them to the same-named file in the library specified for TOFILELIB.
When a record is deleted in a database file, the data values still exist, but the system places a special hex value in front of the record specifying that it is deleted. The operating system prevents any access to a deleted record through its normal interfaces. -
Error: Windows Cannot Be Installed to This Disk. The Selected Disk Has an MBR Partition Table.
Error: Windows Cannot Be Installed to This Disk. The Selected Disk Has an MBR Partition Table. On EFI Systems, Windows Can Only Be Installed to GPT.
I found one solution to this problem on the HP forum, but my laptop has legacy boot and does not allow disabling the EFI boot order.
HP Model# HP 1000-1140TU Notebook PC
Serial# 5cg2481dbg
Product# c8c94pa#uuf
This is the solution found on the HP forum:
Solution
The resolution to this issue depends on the hard disk volume size:
Follow these steps if the hard disk volume size is less than 2.19 TB:
Temporarily disable the EFI Boot Sources setting in the BIOS:
Restart the computer, and then press F10 to enter the BIOS.
Navigate to Storage > Boot Order , and then disable the EFI Boot Sources .
Select File > Save Changes > Exit .
Install the Windows operating system.
Enable the EFI Boot Sources setting in the BIOS:
Restart the computer, and then press F10 to enter the BIOS.
Navigate to Storage > Boot Order , and then enable the EFI Boot Sources .
Select File > Save Changes > Exit .
Follow these steps if the hard disk volume size is greater than 2.19 TB:
Install the HP BIOS Update UEFI utility from the HP Web site:
Click here to access the document "HP BIOS Update UEFI" .
NOTE: The HP BIOS Update UEFI utility is installed by default on some HP computers.
Follow the steps in the Microsoft document titled "How to Configure UEFI/GPT-Based Hard Drive Partitions" (in English) to create a GPT partition.
Click here to access the document "How to Configure UEFI/GPT-Based Hard Drive Partitions" .
NOTE: One or more of the links above will take you outside the Hewlett-Packard Web site. HP does not control and is not responsible for information outside the HP Web site.
I am having the same problem. Windows is trying to install. It identifies the various partitions but says that I cannot install Windows on the Boot Camp partition or any other. I select Drive options (advanced) and format the Boot Camp drive, but it makes no difference.
This is the Windows error:
Windows cannot be installed to this disk. The selected disk has an MBR partition table. On EFI systems, Windows can only be installed to GPT disks.
Windows cannot be installed to this disk. This computer's hardware may not support booting to this disk. Ensure that the disk's controller is enabled in the computer BIOS menu.
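For reference, and not from this thread: the GPT conversion the first error asks for is commonly done from the installer's command prompt (Shift+F10) with diskpart, assuming disk 0 is the target. This is destructive, it wipes the entire disk:

```text
diskpart
list disk          (identify the target disk number first)
select disk 0
clean              (destroys all partitions and data on the disk)
convert gpt
exit
```

After this, refresh the drive list in Setup and the MBR error should be gone. On a Mac, clean would also destroy the OS X partition, so Boot Camp Assistant is the safer route there.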
I am not sure what Csound1 is suggesting with that post above. There are some involved suggestions over here <https://discussions.apple.com/message/23548999#23548999> about using Disk Utility to delete the Boot Camp partition and create new ones - is that the idea? -
How to partition tables and indexes in this scenario?
So our situation is pretty simple. We have 3 tables.
A, B and C
the model is A->>B->>C
Currently A, B and C are range partitioned on a key created_date; however, it's typical that only C is ever qualified with created_date. There is a foreign key from B -> A and from C -> B.
We have many queries where the data is identified by state, which is currently indexed (non-partitioned) on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again, these are non-partitioned indexes at this time.
It is typical that we qualify A on either account or user or both. There are indexes (non-partitioned) on these.
We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard, but this isn't always possible.
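One common workaround for leading-wildcard predicates, an assumption to verify against the real data rather than something from this thread, is a function-based index on the reversed string (Oracle has a REVERSE function): reversing both sides moves the wildcard to the end, where an index can range-scan. A sqlite3 sketch of the predicate rewrite, with made-up names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (account TEXT)")  # hypothetical table/column
con.executemany("INSERT INTO a VALUES (?)",
                [("ABC-ACCOUNT",), ("XYZ-ACCOUNT",), ("ABC-OTHER",)])

# The slow form: leading wildcard, so no ordinary index can range-scan it.
slow = con.execute(
    "SELECT account FROM a WHERE account LIKE '%ACCOUNT' ORDER BY 1"
).fetchall()

# The rewrite: reverse both sides and the wildcard becomes trailing, the
# shape a function-based index on reverse(account) can range-scan.
con.create_function("reverse", 1, lambda s: s[::-1])
fast = con.execute(
    "SELECT account FROM a "
    "WHERE reverse(account) LIKE reverse('ACCOUNT') || '%' ORDER BY 1"
).fetchall()

print(slow == fast)  # -> True: both return the two *-ACCOUNT rows
```

In Oracle this would pair with `CREATE INDEX ... ON a (REVERSE(account))` and the rewritten predicate in the queries; it only helps when the wildcard is at exactly one end of the pattern.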
We are wondering how we can benefit from partitioning and or sub partitioning table A. since it's partitioned on created_date but rarely qualify by that.
We are also wondering where and how we can benefit from either global partitioned index or local partitioned indexes on tables A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
I am also wondering what impact pushing the state column from A (that's used to qualify A) down to C would have, and whether it would give any advantage.
C is the table that we currently qualify with the partition key, so I figure we could also push down the state from A that's used to qualify the set of C's we want, based on the set of B's we want, based on the set of A's we get by qualifying on columns within A.
If we push down some of those values to C and simply use C when filtering, I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before qualifying A.
Edited by: steffi on Jan 14, 2011 11:36 PM
We are wondering how we can benefit from partitioning and or sub partitioning table A. since it's partitioned on >created_date but rarely qualify by that. Very good question. Why did you partition on it? You will never have INSERTs on these partitions, but maybe deletes and updates? The only advantage (I can think of) would be to set these partitions in a read-only tablespace to ease backup... but that's a weird reason.
we have many queries where the data is identified by state that is indexed currently non partitioned on columns in >A ... there are also indexes on the foreign keys that get from C -> B -> A. Again these are non partitioned indexes at >this time.
Of course. Why should they be partitioned by Create_date?
It is typical that we qualify A on either account or user or both. There are indexes (non partitioned) on these
We have a problem now because many of the queries use leading wildcards ie. account like '%ACCOUNT' etc. This >often results in large full table scans. Our solution has been to remove the leading wildcard but this isn't always possible.
I would suspect a full index scan. Isn't it?
We are also wondering where and how we can benefit from either global partitioned index or local partitioned >indexes on tables A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
As A is not accessed by any partition key, why should C and B profit? You should look to partition by the key you are using for access. But you are looking to tune your SQLs where the access is LIKE '%ACCOUNT' on A. Then, when there is a match, Oracle joins via your index and nested loop (right?) to B and C.
I am also wondering what impact pushing the state from A that's used to qualify A down to C would have any >advantage.
Why should it? It just makes the table and indexes larger => more IO.
C is the table that currently we qualify with the partition key so I figure if you also pushed down the state from A >that's used to qualify the set of C's we want based on the set of B's we want based on the set of A thru qualfying >on columns within A.
If the access from A to C were .. AND A.CREATE_DATE = C.CREATE_DATE AND c.key LIKE '%what I want%' (which does not qualify for a FK ;-) ), then, as that could result in a partition scan, you could "profit". But I'm sure that's not your model.
If we push down some of those values to C and simply use C when filtering I'm wondering what the plans will look >like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
So you want to denormalize A, B, C into one table? With the same access LIKE '%ACCOUNT' you would get a full scan on an index as before, just the objects would be larger due to redundancy and harder to maintain. In the end you would have a worse and slower design.
Maybe you can explain what the actual problem is.
A full index scan cannot be avoided, but it can be made faster by e.g. parallel query, and then the join to B and C should be a "snip" if you just identify a small subset of rows in these tables.