testPool method of data source returns a strange value
Hi All,
I am trying to test a data source connection using the MBean API testPool. It gives me some odd output:
<method $Proxy18.testPool of $Proxy18 instance 5>
My code:
connect('weblogic', 'weblogic1', 't3://localhost:7101')
serverRuntime()
dsMBeans = cmo.getJDBCServiceRuntime().getJDBCDataSourceRuntimeMBeans()
ds_name = "ApplicationDB"
for ds in dsMBeans:
    if ds_name == ds.getName():
        print 'DS name is: ' + ds.getName()
        print 'State is ' + ds.getState()
        print ds.testPool
I get the following output:
DS name is: ApplicationDB
State is Running
<method $Proxy18.testPool of $Proxy18 instance 5>
What does '<method $Proxy18.testPool of $Proxy18 instance 5>' mean?
Hi,
In Jython/WLST, testPool is a method; printing ds.testPool without the parentheses prints the bound-method object itself (that is exactly what '<method $Proxy18.testPool of $Proxy18 instance 5>' means) instead of invoking it. To actually test the data source, change the last line to:
print ds.testPool()
Example below:
connect('weblogic', 'weblogic', 't3://localhost:7001')
serverRuntime()
dsMBeans = cmo.getJDBCServiceRuntime().getJDBCDataSourceRuntimeMBeans()
ds_name = "testDS"
for ds in dsMBeans:
    if ds_name == ds.getName():
        print 'DS name is: ' + ds.getName()
        print 'State is ' + ds.getState()
        print ds.testPool()
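The reason the original print showed a method object at all is plain Python semantics rather than anything WebLogic-specific: referencing a method without parentheses yields the method object instead of calling it. A minimal sketch of the same behaviour (the FakeDataSource class below is a stand-in for the runtime MBean, not the real WLST proxy; WLST's Jython uses Python 2 print statements, while this sketch uses Python 3 syntax):

```python
class FakeDataSource:
    """Stand-in for the JDBC data source runtime MBean."""
    def testPool(self):
        # Mimic the MBean contract: return None when the pool test passes.
        return None

ds = FakeDataSource()
print(ds.testPool)    # prints the bound-method object, e.g. <bound method FakeDataSource.testPool of ...>
print(ds.testPool())  # actually invokes the test; prints None on success
```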
Thanks
Jay SenSharma
http://jaysensharma.wordpress.com (WebLogic Wonders Are Here)
Similar Messages
-
SSRS Active Directory Data Source LastLogonTimeStamp returns "System._ComObject"
I am working in Report Builder and I have created a data source connection to my Active Directory. I want to get lastLogonTimestamp from my AD computer objects, but when I run the query I get "System._ComObject" instead of a date. All the other fields return data.
Here is my code:
SELECT ADsPath, cn ,objectCategory,name, lastLogonTimestamp
FROM 'LDAP://DC=domain,DC=org'
where objectCategory = 'Computer'
Eventually I would like a query that returns a count of computer objects where lastLogonTimestamp is older than 30 days.
I have researched this and found that lastLogonTimestamp is an IADsLargeInteger
value and not an actual date. How can I display this as a date in SSRS Report Builder?
Hi Ryaed,
The lastLogon and lastLogonTimestamp LDAP timestamp attributes are returned in int64 format (also called Windows NT time format): the value is the number of 100-nanosecond intervals (1 nanosecond = one billionth of a second) since Jan 1, 1601 UTC.
So, we cannot tell the exact date and time directly. To convert the Windows NT time format value to human readable date value, you can use the following query:
SELECT ADsPath, cn ,objectCategory,name, DATEADD(Minute,(lastLogonTimeStamp/ 600000000) + DATEDIFF(Minute,GetUTCDate(),GetDate()),CAST('1/1/1601' AS DATETIME2))
FROM 'LDAP://DC=domain,DC=org'
WHERE objectCategory = 'Computer'
Or you can also create a user defined function:
CREATE FUNCTION dbo.udf_Int8_to_DateTime(
@Int8 BIGINT
)
RETURNS DATETIME2
AS
BEGIN
RETURN (DATEADD(Minute, @Int8 / 600000000 + DATEDIFF(Minute, GetUTCDate(), GetDate()), CAST('1/1/1601' AS DATETIME2)))
END
Then, use the function in the query like below:
SELECT ADsPath, cn, objectCategory, name, dbo.udf_Int8_to_DateTime(lastLogonTimestamp)
FROM 'LDAP://DC=domain,DC=org'
WHERE objectCategory = 'Computer'
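As a cross-check of the arithmetic above (600,000,000 hundred-nanosecond intervals per minute), the same conversion can be sketched in Python; the function name here is made up, and the well-known FILETIME value of the Unix epoch is used to sanity-check it:

```python
from datetime import datetime, timedelta, timezone

def filetime_to_datetime(ft):
    """Convert a Windows NT timestamp (100-ns intervals since 1601-01-01 UTC)
    into a timezone-aware datetime."""
    epoch_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return epoch_1601 + timedelta(microseconds=ft // 10)

# The FILETIME of the Unix epoch (1970-01-01 UTC) is 116444736000000000.
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```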
Reference:
http://myitforum.com/cs2/blogs/jnelson/archive/2009/08/25/140938.aspx
Regards,
Mike Yin
TechNet Community Support -
Crystal ActiveX Runtime Lib: Change text data source path at run time.
We have some PCs running Crystal Reports 10 and some running CR 9 and 8.5. For each PC, we set up a System DSN ODBC data source (in Control Panel - Administrative Tools) for pulling data from text files to
generate reports.
Recently we wrote some routines (see the Visual Basic example at the
end of this message) to change the path of the data files at runtime.
According to the Crystal Reports Technical Reference Guide, we may use
the LogOnServer() method of an Application object or a DatabaseTable
object. However, we find that this does not work: the PrintOut()
method only pulls data from the default path as configured for the
System DSN, not from the path passed as the third parameter of
LogOnServer(). It does not return any error message.
We have also tried to use the SetTableLocation() method, and it still does
not work.
Would any experts examine our code below and advise what we are missing? Thanks.
For the following VB example, we have:
System DSN Name: AP_WORKSHEET
Driver: Microsoft Text Driver
Database Directory: D:\0ood2 (i.e. the default path)
Crystal Report Document: D:\3g\run\Vision\apcyto\Reports\crBlockWS.rpt
(Which specifies that the data source text file name is BlockWS.txt)
Purpose : We would like to read the data source text file from
D:\0ood1 instead of the default path.
Following is the code of the VB macro:
Sub test()
Rem In this version of the subroutine, we call
Rem DatabaseTable.LogOnServer() and "Rem"ed out
Rem Application.LogOnServer() and SetTableLocation().
Rem We have un"Rem"ed each of them and "Rem"ed others and try to run.
Rem In all runs, data are pulled from the default file
Rem D:\0ood2\BlockWS.txt instead of D:\0ood1\BlockWS.txt.
Dim crxapp As CRAXDRT.Application
Dim crxRep As CRAXDRT.Report
Dim crxDB As CRAXDRT.Database
Dim crxTab As CRAXDRT.DatabaseTable
Dim crxConnPs As CRAXDRT.ConnectionProperties
Dim crxConnP As CRAXDRT.ConnectionProperty
Dim apropSubLoc As String
Dim apropConnBufStr As String
Set crxapp = CreateObject("CrystalRuntime.Application")
Rem
crxapp.LogOnServer "p2sodbc.dll", "AP_WORKSHEET", "<CRWDC>DBQ=D:\0ood1", "", ""
Set crxRep = crxapp.OpenReport _
    ("D:\3g\run\Vision\apcyto\Reports\crBlockWS.rpt")
Set crxDB = crxRep.Database
Set crxTab = crxRep.Database.Tables(1)
apropConnBufStr = crxTab.ConnectBufferString
apropSubLoc = crxTab.SubLocation
crxDB.LogOnServer "p2sodbc.dll", "AP_WORKSHEET", "<CRWDC>DBQ=D:\0ood1", "", ""
Rem crxTab.SetTableLocation "D:\0ood1\BlockWS.txt", apropSubLoc, "DSN="
Rem Set crxConnPs = crxTab.ConnectionProperties
Rem Set crxConnP = crxConnPs.Item("DSN")
Rem crxConnP.Value = "AP_WORKSHEET"
Rem Set crxConnP = crxConnPs.Item("Database")
Rem crxConnP.Value = "D:\0ood1\BlockWS.txt"
Rem crxTab.Location = "BlockWS.txt"
crxRep.DiscardSavedData
crxRep.PrinterSetup (0)
crxRep.PrintOut
End Sub
For VB macros, the problem exists in all of CR 8.5, 9 and 10. However,
for another platform we are using, Unify Vision 4GL, it works for CR
8.5 while not working for CR 9 and 10.
Following is the source code in Unify Vision 4GL. This language may
not be popular, but I think you will be able to see how it calls the
Runtime Library methods LogOnServer(), OpenReport(), PrinterSetup() and
PrintOut().
%gfPrintCrystalReport
BOOL FUNCTION gfPrintCrystalReport($reportName)
BEGIN
if NOTMKNOWN(GF:$oSeagateId) then
create service of activex
class 'CrystalRuntime.Application'
object_ref into GF:$oSeagateId;
if MKNOWN(GF:$oSeagateId) then
begin
/* TD23013: Database directories are dynamic to
accommodate multiple user requirement of Citrix */
send message LogOnServer to GF:$oSeagateId
using
( 'PDSODBC.DLL', 'AP_WORKSHEET', '<CRWDC>DBQ='+GF:$WinTempDir,'','')
identified by $msgHandle;
if $msgHandle:MSG_STATE <> 'RESPONSE_PROCESSED'
then
begin
display 'Crystal Reports cannot connect
to the datasource ' for fyi_message wait;
return (FALSE)
end
send message OpenReport to GF:$oSeagateId using
($reportName, 1)
identified by $msgHandle returning
$oCrystalReport
if MKNOWN($oCrystalReport) then
begin
if (NOTMKNOWN(GF:$printerName)) then
set GF:$printerName to
$oCrystalReport->PrinterName;
if GF:$printerName <> $oCrystalReport->PrinterName then
send message SelectPrinter to
$oCrystalReport
using
(GF:$driverName,GF:$printerName,GF:$portName)
identified by $msgHandle;
set $oCrystalReport->DisplayProgressDialog to FALSE;
while TRUE
begin
DISPLAY NOTICE 'Print to : ' +
GF:$printerName
LABELS 'Ok'
DEFAULT, 'Cancel', 'Printer Setup'
RESULT INTO $userOption
switch ($userOption)
begin
case 0 :
send
message PrintOut to $oCrystalReport
using
(PROMPT_USER, NUMBER_OF_COPIES, COLLATED, START_PAGE, STOP_PAGE)
identified by $msgHandle;
set
$oCrystalReport to UNDEFINED
return
(TRUE);
case 1:
set
$oCrystalReport to UNDEFINED
return
(FALSE);
case 2:
send
message PrinterSetup to $oCrystalReport
using
(0)
identified by $msgHandle;
if GF:$printerName <> $oCrystalReport->PrinterName then
begin
set GF:$printerName to $oCrystalReport->PrinterName;
set GF:$driverName to $oCrystalReport->DriverName;
set GF:$portName to $oCrystalReport->PortName;
end
break;
end
end
end
end
return
(FALSE);
END
Hi Sydney,
If you search the Developers help file you'll find info on using the method:
How to change the data source
This example demonstrates how to change the data source from native Access to an OLEDB (ADO) data source by using the ConnectionProperty Object, as well as how to change the table name by using the Location property of the DatabaseTable Object. CrystalReport1 is connected to the xtreme.mdb database found in the \Program Files\Crystal Decisions\Crystal Reports 10\Samples\En\Databases folder. The report is using the Customer table. A copy of the Customer table is added to the pubs database on Microsoft SQL Server.
' Create a new instance of the report.
Dim Report As New CrystalReport1
Private Sub Form_Load()
' Declare a ConnectionProperties collection.
Dim CPProperties As CRAXDRT.ConnectionProperties
' Declare a DatabaseTable object.
Dim DBTable As CRAXDRT.DatabaseTable
' Get the first table in the report.
Set DBTable = Report.Database.Tables(1)
' Get the collection of connection properties.
Set CPProperties = DBTable.ConnectionProperties
' Change the database DLL used by the report from
' native Access (crdb_dao.dll) to ADO/OLEDB (crdb_ado.dll).
DBTable.DllName = "crdb_ado.dll"
' The connection property bags contain the name and value
' pairs for the native Access DLL (crdb_dao.dll). So we need
' to clear them, and then add the name and value pairs that
' are required to connect to the OLEDB data source.
' Clear all the ConnectionProperty objects from the collection.
CPProperties.DeleteAll
' Add the name value pair for the provider.
CPProperties.Add "Provider", "SQLOLEDB"
' Add the name value pair for the data source (server).
CPProperties.Add "Data Source", "ServerA"
' Add the name value pair for the database.
CPProperties.Add "Initial Catalog", "pubs"
' Add the name value pair for the user name.
CPProperties.Add "User ID", "UserName"
' Add the name value pair for the password.
CPProperties.Add "Password", "password"
' Set the table name. ' for SQL types it would be "database.dbo.table"
DBTable.Location = "Customer"
Screen.MousePointer = vbHourglass
' Set the report source of the viewer and view the report.
CRViewer1.ReportSource = Report
CRViewer1.ViewReport
Screen.MousePointer = vbDefault
End Sub
-
Hi there,
I am having a serious issue with the Power BI Data Management Gateway which I am hoping someone can help me with.
Basically I am setting up a connection between a Power BI demo site and a SQL 2012 database hosted on Azure. The Data Management Gateway is up and running, and Power BI has managed to connect to it successfully.
By following the tutorials here I was able to successfully create my Data Connection Gateway with a self-signed certificate.
However, when trying to create the data source I run into problems. The Data Source Manager manages to successfully resolve the hostname.
Bear in mind that I exposed the required ports in Azure as endpoints, and I modified the hosts file on my local machine so I could access the SQL Server hosted in Azure using its internal name -- otherwise I would not be able to get this far.
However, the creation of the data source fails even when trying to create it whilst logged in to the SQL Server in question:
The Data Source Manager returns the error when using the Microsoft OLE DB Provider for SQL Server:
Failed to test connection. [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied
I tried using the SQL Server Native Client 11.0 instead but I also get an error. This time the error is:
Failed to test connection. Login timeout expired. A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. Named Pipes Provider: Could not open a connection to SQL Server [53].
Some considerations
If I provide an invalid username/password, the Data Source Manager does say that the username and password is incorrect.
Firewall is turned off on the SQL Server (either way, this error also happens if I try to use the Data Source Manager whilst logged in to the SQL Server itself).
SQL Profiler does not show any attempt of connection.
The SQL server instance in question is the default one.
The error happens regardless if I select the option to encrypt connection or not.
In SQL Server Configuration Manager I can see that all protocols are enabled (TCP/IP, Named Pipes and Shared Memory).
The Event Viewer does not provide any further errors than the one I have copied in this post.
I'm at a loss here. Could someone please advise what might I be doing wrong?
Regards,
P.
Here is what I had to do to solve this issue:
Basically I had to add the MSSQL TCP/IP port as an endpoint in Azure. After I did that, I was able to create the data source. However, I was only able to authenticate with a SQL account, as any domain account would return an error saying that the domain isn't trusted.
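One way to verify an endpoint fix like this independently of the gateway is a plain TCP reachability check; a minimal sketch (the host name below is hypothetical, and port 1433 assumes a default SQL Server instance):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against a hypothetical Azure-hosted server:
# port_open('myserver.cloudapp.net', 1433)
```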
What puzzles me here is how the Data Source Manager could tell me that a username/password was invalid, yet fail or time out when I provided valid credentials. -
Data source on top of virtual cube: s_r_infoprov exception
hi experts,
I have generated an export data source on a virtual cube. The virtual cube is based on a function module.
When I display data in the virtual cube it shows all the data. When I extract the data from it through a DTP, it works well.
But when I use the (exported) data source to pull the data from the virtual cube, it does not pull the data.
It throws an exception at the following place.
CALL FUNCTION 'RSDRI_INFOPROV_READ'
EXPORTING
"begin qyh030513
i_infoprov = l_infoprov "'/CPMB/ASFIN'
* i_infoprov = do_util->do_appl_info_m-MULTIPROV "'/CPMB/ASFIN'
"end qyh030513
i_th_sfc = lth_sfc
i_th_sfk = lth_sfk
i_t_range = lt_range
i_rollup_only = space
"i_use_db_aggregation = abap_true "abap_true "RS_C_TRUE
i_use_db_aggregation = IF_DB_AGGREGATE "abap_true "RS_C_TRUE
i_use_aggregates = abap_true "abap_true "RS_C_TRUE
i_packagesize = i_packagesize
i_authority_check = space
i_currency_conversion = space
IMPORTING
e_t_data = et_data
e_split_occurred = e_split_occurred
e_end_of_data = e_end_of_data
e_stepuid = l_stepuid
CHANGING
c_first_call = c_first_call
EXCEPTIONS
illegal_input = 1
illegal_input_sfc = 2
illegal_input_sfk = 3
illegal_input_range = 4
illegal_input_tablesel = 5
no_authorization = 6
illegal_download = 8
illegal_tablename = 9
OTHERS = 11.
When we step into this function module in the debugger, the code below is reached:
STATICS: s_r_infoprov TYPE REF TO cl_rsdri_infoprov.
CLEAR: e_t_msg.
IF i_clear = rs_c_true.
CLEAR s_r_infoprov.
RETURN.
ENDIF.
IF s_r_infoprov IS NOT INITIAL AND c_first_call = rs_c_true.
* nested call of RSDRI_INFOPROV_READ
MESSAGE e882(dbman) RAISING illegal_input.
ENDIF.
The problem is with the s_r_infoprov value. When I execute with the data source in RSA3, the value is {O:222*\CLASS=CL_RSDRI_INFOPROV},
and when I execute the virtual cube directly, the value of s_r_infoprov is initial.
So because of this value {O:222*\CLASS=CL_RSDRI_INFOPROV} the code above got executed and threw the exception.
Experts, please guide me on how to solve this. Will a data source on top of a virtual cube not work?
thank you
Hi Sander,
Thank you for the reply.
I am trying to understand why the virtual cube is not working when it is generated as an exported data source, yet works perfectly when directly connected as a source to a DSO or cube.
The problem is with the s_r_infoprov value. It is getting populated differently, which is why there is an exception when using the data source (the virtual cube generated as an exported data source).
If this functionality is not going to work, then why has SAP given an option to generate an exported DS from a virtual cube?
Thank you all for giving some valuable inputs on this. Somehow something is missing, it seems...
Hi,
I am working on the PM cube 0PM_C01. I have to enhance this data source to get some additional data.
The requirement is to create a custom staging DSO for this cube 0PM_C01 and then perform the reporting from a cube.
Should I copy the standard InfoSource ZPM_0M_0PA_2 to a custom InfoSource, say XPM_OM_0PA_2, and then add a new InfoObject to hold the enhanced data, after which I will map this InfoSource to the staging DSO?
Using this approach I will make sure all the update rules and transfer rules are captured.
Or
Can I just map (via a transformation) the enhanced data source 0PM_0M_OPA_2 to the staging DSO using BI 7?
If this approach is used, how can I make sure the standard update rules and transfer rules are applied?
I need to make sure all the current data in the data source is pulled into the DSO.
Please let me know.
Hi there,
I would use BI 7 transformation rather than transfer and update rules...
When moving transfer and update rules to a transformation:
1) Right-click on the transfer / update rules -> Additional functions -> Create transformation
2) Make sure that all the routines in the old rules are converted to classes in the transformation. This has to be done manually by going through the code.
3) Enhance the DataSource and apply the necessary rules in the transformation for the enhanced objects
4) Load to the DSO
Edited by: Phani Kothwali on Mar 4, 2010 6:26 AM -
Performance issue with big CSV files as data source
Hi,
We are creating Crystal Reports for a large banking corporation with CSV files as the data source. For some reports, we need to join 2 CSV files. The problem we have met is that when the 2 CSV files are very large (both >200 MB), performance is very bad and it takes an hour or so to refresh the data in the Crystal Reports designer. The same happens in both CR 11.5 and CR 2008.
My question is: is there any way to improve performance in such situations? For example, can we create an index on the CSV files? If you have ever created reports connecting to CSV, your suggestions will be highly appreciated.
Thanks,
Ray
Certainly a reasonable concern...
The question at this point is, How are the reports going to be used and deployed once they are in production?
I'd look at it from that direction.
For example... They may be able to dump the data directly to another database on a separate server that would insulate the main enterprise server. This would allow the main server to run the necessary queries during off peak hours and would isolate any reporting activity to a "reporting database".
This would also keep the data secure and encrypted (it would continue to enjoy the security provided by an RDBMS). Text & csv files can be copied, emailed, altered & deleted by anyone who sees them. Placing them in encrypted .zip folders prevents them from being read by external applications.
Hope you liked the sales pitch I wrote for you to give to the client... =^)
If all else fails and you're stuck using the CSV files, at least see if they can get it all into one file. Joining the 2 files is killing you performance-wise... more so than using 1 massive file.
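If merging the files is not an option either, a workaround sketch (file, table and column names below are all made up): stage the two CSVs into a local SQLite database, index the join key, and let the report query that database instead of the text driver, so the join runs against an index rather than two flat-file scans.

```python
import csv
import sqlite3

def load_csv(conn, table, path):
    """Create a table from a CSV file's header row and bulk-insert its data rows."""
    with open(path, newline='') as f:
        rows = csv.reader(f)
        header = next(rows)
        cols = ', '.join('"%s"' % c for c in header)
        conn.execute('CREATE TABLE %s (%s)' % (table, cols))
        marks = ', '.join('?' for _ in header)
        conn.executemany('INSERT INTO %s VALUES (%s)' % (table, marks), rows)

def build_report_db(db_path, accounts_csv, txns_csv):
    """Stage both CSVs into SQLite and index the (assumed) join column."""
    conn = sqlite3.connect(db_path)
    load_csv(conn, 'accounts', accounts_csv)
    load_csv(conn, 'txns', txns_csv)
    conn.execute('CREATE INDEX idx_txns_acct ON txns("account_id")')
    conn.commit()
    return conn
```

The report would then connect to the SQLite file (e.g. via an ODBC driver for SQLite) and join accounts to txns inside the database.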
Jason -
CAF and custon data source in CE 7.2?
Hello,
Is it possible in CE 7.20 to use the CAF framework with a custom JDBC data source instead of the system database?
Best Regards
Sebastian
Edited by: Sebastian Lenk on Mar 2, 2011 5:25 PM
Hi,
In theory this should work, because even SAP and CAF do not break the way Java EE handles database persistence; in practice it is very difficult, probably impossible.
CAF generates something like a Java EE 5 application with JPA 1.0 persistence, so in principle it is unaware of a system data source. If you look at src/META-INF/persistence.xml, you find a jta-data-source called SAP/CAF_RT, which is an alias for the system data source. This persistence.xml is regenerated every time you generate your CAF app, so your changes are lost.
In NWA, you cannot change the alias SAP/CAF_RT to point to another JDBC DataSource (at least I did not manage it; I got an error message).
And there are about 25 CAF* tables in the schema of the system data source which CAF will probably need, and it may expect them in the same data source as your app tables.
In principle, you could try to create these CAF* tables in another database schema, and maybe it is somehow possible to let SAP/CAF_RT point to another data source by editing some of SAP's CAF EARs, but this sounds experimental.
So I am quite sure CAF only supports the system data source. CAF was not invented to support agility and degrees of freedom for the developer.
Cheers
--Rolf
P.S. IMHO, with EJB 3.x and JPA it is not a good idea to use CAF for persistence at all. With EJB 3 there is not even a good reason to use it for local apps. CAF was useful in EJB 2.x times, when development of persistent classes took a lot of boilerplate code. SAP updated the framework towards EJB 3 but kept the inflexible structure, whereas with JPA 1.0 it is easy to learn and write @Entity, @Column, EntityManager.find or EntityManager.persist. E.g. the CAF way of handling relations between tables is a pain, and the programming model for data access is outdated. And the CAF way of model-driven development is too inflexible to support an agile development process in 2011. So use CAF only for small projects that will not grow.
Problem opening ODBC data source in Windows 2003
I am having difficulty opening an ODBC data source in SQL
Server 2000 running on Windows 2003 using ColdFusion MX7 (it works
cleanly against the same database on Windows 2000 Server). Anyone have
any ideas?
Here it is; there is an issue with our Search functionality on the new SDN:
Symptom
When setting location on a Crystal Reports XI report using the Progress OpenEdge ODBC Driver in Crystal Reports 2008 or Crystal Reports 2011, you get “Some tables could not be replaced, as no match was found in the new data source. Please specify the table required for any unmodified tables.”
Environment
Crystal Reports 2008 or 2011
Progress OpenEdge ODBC Driver
Reproducing the Issue
Open a Crystal Report created in CR XI Report using the Progress OpenEdge ODBC Driver in Crystal Reports 2008 or 2011
Set location to a new ODBC Data Source
Get error "some tables could not be replaced, as no match was found in the new data source. Please specify the table required for any unmodified tables.”
Cause
Crystal Reports XI only creates aliases for tables with special characters or duplicate names.
Crystal Reports 2008 introduces a change which requires aliases for all tables.
Aliases are not created when doing a Set Location on the whole data source.
If a Customer has hundreds of reports they cannot set location of each table and fix the aliases.
Resolution
Upgrade to Crystal Reports 2011 Support Pack 02 or higher
Under registry key [HKEY_LOCAL_MACHINE\SOFTWARE\SAP BusinessObjects\Suite XI 4.0\Crystal Reports\Database\ODBC] create string UseTableNameAsAlias and set it to PGPRO1019.DLL, PGOE1022.DLL. Registry file is attached.
Keywords
Set Location
Progress ODBC
Crystal Reports 2008 2011 -
Setting Up an ODBC Data Source in Windows XP for PostgreSQL Unicode
Can someone help me setup an ODBC data source for a PostgreSQL Unicode database?
1. Under which tab(s) should I setup the Data Source: User DSN, System DSN, File DSN?
2. The check boxes on Advanced Options Pages 1 and 2, which should be checked?
I have Crystal Report XI, Windows XP, and PostgreSQL Unicode Version 08.1.0100.
Thanks!
Glynn
Here it is; there is an issue with our Search functionality on the new SDN:
Symptom
When setting location on a Crystal Reports XI report using the Progress OpenEdge ODBC Driver in Crystal Reports 2008 or Crystal Reports 2011, you get “Some tables could not be replaced, as no match was found in the new data source. Please specify the table required for any unmodified tables.”
Environment
Crystal Reports 2008 or 2011
Progress OpenEdge ODBC Driver
Reproducing the Issue
Open a Crystal Report created in CR XI Report using the Progress OpenEdge ODBC Driver in Crystal Reports 2008 or 2011
Set location to a new ODBC Data Source
Get error "some tables could not be replaced, as no match was found in the new data source. Please specify the table required for any unmodified tables.”
Cause
Crystal Reports XI only creates aliases for tables with special characters or duplicate names.
Crystal Reports 2008 introduces a change which requires aliases for all tables.
Aliases are not created when doing a Set Location on the whole data source.
If a Customer has hundreds of reports they cannot set location of each table and fix the aliases.
Resolution
Upgrade to Crystal Reports 2011 Support Pack 02 or higher
Under registry key [HKEY_LOCAL_MACHINE\SOFTWARE\SAP BusinessObjects\Suite XI 4.0\Crystal Reports\Database\ODBC] create string UseTableNameAsAlias and set it to PGPRO1019.DLL, PGOE1022.DLL. Registry file is attached.
Keywords
Set Location
Progress ODBC
Crystal Reports 2008 2011 -
10.1.3 Migration Issue with data-sources.xml
Hi:
Running into lots of migration issues for an application built in 10.1.3 EA and migrated to 10.1.3 production, mostly in the BC project. Here's one: the database connections built in EA were not carried over to prod, so they needed to be re-built. No biggie. But now the data-sources.xml file in the model project has duplicate entries for my data sources. I have tried directly editing that file and removing the duplicates, but when I go to run my application (ADF Faces & ADF BC), JDeveloper hangs. When I abort JDev and restart it, the duplicate entries are back in data-sources.xml (and yes, I did save the file after making the changes). It seems that the XML file is built dynamically from definitions stored elsewhere. Where is the source? What will I need to modify to get the changes to take?
Thanks.
Johnny Lee
There is new control over when/whether JDeveloper adds/updates your data-sources.xml file for the embedded server. The settings are located here:
Tools | Embedded OC4J Server Preferences | Current Workspace | Data Sources
There are some settings there that control whether/when we update the data-sources.xml -
COPA data source delivers incorrect values
Hello
We have one P&L report in R/3. I generated a COPA data source and used it to extract data into BW. However, there is a difference between the report in R/3 and the report in BW. Is it possible that the COPA-generated data source does not bring correct values? How can I check and correct it?
I am 100% sure that the report on R/3 and the one on BW use the same logic.
Hi Aleksanders,
Check some of the values using the RSA3 tcode and compare them against the R/3 report values. If there is a difference, then the data source is not created properly. If they match, then on the BW side load the data as of that time and compare it to the R/3 data. This will definitely help you.
regs,
mahantesh -
Problem while generating classification data-sources
Hi,
I have one classification data source, 1CL_OVEN001, which is based on vendor master data and class type 10. I tried to add one field to this data source. While generating the data source, I got an error like "Extractor cannot be generated".
Then I searched the SAP site and found note 1946309. When I implemented this note and tried to generate the data source, I got another error:
"EXTRACT STRUCTURE IS NOT PERMITTED"
I have tried several notes but keep getting the same error.
Please help.
Thanks in advance
Devesh Varshney
Hi,
Which field did you add? If it is a key figure, then make sure that you have also added its unit.
-Sriram -
How to use a property from other data source
Scenario: we have 2 data sources that each have their own set of records with different properties. What I need to do is grab 1 property that exists in all records of 1 of the data sources and display it in the records of the other data source. There is 1 "common" property (called by different names) in the records of both data sources that has the same value; this property could be the link between those records even though they belong to 2 different data sources.
Here's a simple example:
Data Source " Department" has records that have the properties: 'Name', 'Age' and 'Nationality'.
Data Source "Staff" has records that have the properties:'Staff_name', 'Position'.
The 'Name' and 'Staff_name' have the same values, and can be used to link the 2 records.
I'd like to get the records in the "Department" data source to also contain the property 'Position' from the "Staff" data source.
Is there a manipulator that can do this?
Thanks!
J
Thanks Pravin,
But I don't want to join all data from the 2 datasources - I just want one property from one data source to appear in the records of the other data source. -
Some doubts on using resource-ref, connection pool and DataSource
Hi all ,
I need little bit clarification in the following points.
1) Can the connection pool which is created be displayed in the JNDI tree?
I am seeing only the DataSource in the JNDI tree.
2) In the case of Bean-Managed Persistence, in ejb-jar.xml,
I had given the following info.
<resource-ref>
<res-ref-name>ramukkDataSource</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
Does the <res-ref-name> refer to the data source name?
In weblogic-ejb-jar.xml
<reference-descriptor>
<resource-description>
<res-ref-name>ramukkDataSource</res-ref-name>
<jndi-name>ramukkpool</jndi-name>
</resource-description>
</reference-descriptor>
Here <jndi-name> refers to the connection pool, as per the WebLogic Bible book.
If so, when I deploy my EJB into the server I get "DataSource can't be found".
If i had given like the following,
In ejb-jar.xml
<resource-ref>
<res-ref-name>jdbc/ramuJndi</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
In weblogic-ejb-jar.xml
<reference-descriptor>
<resource-description>
<res-ref-name>jdbc/ramuJndi</res-ref-name>
<jndi-name>jdbc/ramuJndi</jndi-name>
</resource-description>
</reference-descriptor>
Only then does my EJB code deploy successfully and do some work as expected.
Can anybody tell me why I have to give the same name for <res-ref-name> and <jndi-name>?
I have been working on this problem for the last week and still have not found the solution.
Connection Pool Creation
GENERAL::
Name : ramukkpool
url : jdbc:mysql://localhost:3306/test
Driver Classname: com.mysql.jdbc.Driver
Properties :
user = root
Password = XXX
create
TARGETS::
I shifted myServer from the left side to the right side and clicked Apply.
(Technically, can we call this deploying the connection pool to the server?
If not, how do I deploy the connection pool to the server?)
I did not get any errors in the console.
Now I am creating a data source
CONFIGURATION:
Name : ramukkDataSource
JNDI Name: jdbc/ramuJndi (Do we have to follow this convention, i.e. must the JNDI name start with jdbc/?)
PoolName: ramukkpool
create
TARGETS::
I shifted myServer from left to right and clicked Apply.
Again I did not get any errors in the console.
Thanks (in advance),
ramu
I have read the documentation.
I changed my class to oracle.jdbc.pool.OracleConnectionCacheImpl from
oracle.jdbc.pool.OracleConnectionPoolDataSource
I observed in the EM that the Open JDBC Connections and Total JDBC Connections are the same. When I used oracle.jdbc.pool.OracleConnectionPoolDataSource, my Total JDBC Connections kept increasing while Open JDBC Connections remained 0.
My question still remains unanswered; could someone kindly help.
Q? I have defined a data source in JDeveloper using
oracle.jdbc.pool.OracleConnectionCacheImpl
In my Java bean I am using the code pasted below to make a connection to the database.
Can someone tell me whether I am using the correct connection pooling mechanism, or do I need to make some changes? This application uses JSP and is used by lots of people, so it hits the database very frequently.
Any help is highly appreciable.
Thanks in advance.
*******Code to make connection*********
private javax.naming.InitialContext context = null;
private javax.sql.DataSource jdbcURL = null;
private java.sql.Connection con = null;
private static final String url = "jdbc/ProdCoreDS";

public boolean openConnection() {
    try {
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        context = new javax.naming.InitialContext();
        jdbcURL = (javax.sql.DataSource) context.lookup(url);
        con = jdbcURL.getConnection();
        return true;
    } catch (Exception e) {
        System.out.println("Error in the Connection " + e);
        e.printStackTrace();
        return false;
    }
}