Global cons system - Query
Can we use GCS to consolidate multiple ledgers (each an LE ledger) with the same COA but different accounting conventions into a parent ledger that also uses the same COA?
If GCS cannot be used, what method of consolidation can be used in this scenario?
Sunil,
Just right-click on the created structure and give it a technical name and description.
It will be saved so that you can use it across all queries under the InfoProvider.
Doodle
Similar Messages
-
Null Pointer Exception on stmt = conn.prepareStatement( query );
I configured Tomcat 5.0.27 to use the connection pool feature and connect to Oracle. I am testing whether I did the configuration correctly, so I tried to retrieve some data from a table. I got a
'Null Pointer Exception' when executing
stmt = conn.prepareStatement( query );
I am not sure where the problem comes from.
1. Is the configuration not done properly?
2. Are there bugs in my program?
I need your help to diagnose the problem. The messages in the Tomcat log are shown below my program code.
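Before digging into the Tomcat configuration, one likely cause is worth noting (this is an assumption based on the symptoms, not confirmed by the logs): if the JNDI lookup or ds.getConnection() fails, the DBConnection class shown below swallows the exception and returns null, so the failure only surfaces later as a NullPointerException at conn.prepareStatement(query). A minimal fail-fast sketch; the requireConnection helper is hypothetical and not part of the original code:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class ConnectionCheck {
    // Hypothetical helper: fail fast on a null connection instead of letting
    // it cause a NullPointerException at prepareStatement() later on.
    static Connection requireConnection(Connection conn) throws SQLException {
        if (conn == null) {
            throw new SQLException(
                "DataSource returned no connection; check the jdbc/OracleDB pool configuration");
        }
        return conn;
    }

    public static void main(String[] args) {
        try {
            // null simulates a failed JNDI lookup / pool misconfiguration
            requireConnection(null);
        } catch (SQLException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```

With a check like this, a misconfigured pool shows up as a clear SQLException at the source instead of an NPE deep inside the DAO.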
The code for the class where the 'Null Pointer Exception' occurred is:
package org.dhsinfo.message.dao;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.sql.SQLException;
import org.dhsinfo.message.MemberBean;
import org.dhsinfo.message.exceptions.MemberDAOSysException;
import org.dhsinfo.ConnectionPool.DBConnection;
public class OracleMemberDAO implements MemberDAO
{
    // Here the return type is Collection
    public Collection findMembers()
        throws MemberDAOSysException
    {
        Connection conn = null;
        PreparedStatement stmt = null;
        ResultSet rs = null;
        MemberBean memberBean;
        Collection members = new ArrayList();
        String query = "SELECT name FROM PersonType";
        try
        {
            conn = DBConnection.getDBConnection();
            stmt = conn.prepareStatement( query ); // line number 32
            rs = stmt.executeQuery();
            while( rs.next() )
            {
                memberBean = new MemberBean();
                memberBean.setName( rs.getString( "name" ) );
                members.add( memberBean );
            }
            return members;
        }
        catch (SQLException se)
        {
            se.printStackTrace( System.err );
            throw new MemberDAOSysException("SQLException: " + se.getMessage());
        }
        finally
        {
            if ( conn != null )
            {
                try
                {
                    rs.close();
                    rs = null;
                    stmt.close();
                    stmt = null;
                    conn.close();
                }
                catch( SQLException sqlEx )
                {
                    System.out.println( "Problem occurs while closing " + sqlEx );
                }
                conn = null;
            }
        }
    }
}
java.lang.NullPointerException
at org.dhsinfo.message.dao.OracleMemberDAO.findMembers(OracleMemberDAO.java:32)
at org.dhsinfo.message.MemberService.getMembers(MemberService.java:18)
at org.dhsinfo.message.SendMessage.execute(SendMessage.java:29)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:484)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:274)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1482)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:525)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
at java.lang.Thread.run(Thread.java:534)
Here is my DBConnection.java class. I have used it many times. I feel like checking things one by one. If it is not my DBConnection.java class, then the next thing to look at is my connection pool configuration. I have a feeling that it is my connection pool configuration.
package org.dhsinfo.ConnectionPool;
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
public class DBConnection
{
    public static Connection getDBConnection() throws SQLException
    {
        Connection conn = null;
        try
        {
            InitialContext ctx = new InitialContext();
            DataSource ds = ( DataSource ) ctx.lookup( "java:comp/env/jdbc/OracleDB" );
            try
            {
                conn = ds.getConnection();
            }
            catch( SQLException e )
            {
                System.out.println( "Open connection failure: " + e.getMessage() );
            }
        }
        catch( NamingException nEx )
        {
            nEx.printStackTrace();
        }
        return conn;
    }
} -
Is there a way to configure and store global or system variables, not related to any one specific entity type or entity instance: something system-wide that I can query almost like a global session variable? Any suggestions?
Thanks, I think that will work. I basically created a custom object with no associated objects, just the basic
name and description (which I'll use like a name/value pair). -
Hello everybody,
I did some development using the NWDS (JDI installed). After I checked in my application, activated it, and released it, it was shown in the import queue on the Consolidation tab of the CMS application. So far everything worked great, no errors whatsoever. So I continued to import the application into the Cons system. But it failed. The log in the detail view says the error occurred during the CBS-make step.
Here is the log:
Info:Starting Step CBS-make at 2006-03-25 11:29:47.0966 +1:00
Info:wait until CBS queue of buildspace ABS_ESS_C is completely processed, before starting the import
Info:waiting for CBS queue activity
Info:wait until CBS queue of buildspace ABS_ESS_C is completely processed, before asking for build results
Info:waiting for CBS queue activity
Info:build process already running: waiting for another period of 30000 ms (1)
Info:CBS server log has been written to CBS log
Fatal:the compartment abs-team.de_ABS_ESS_1 contains dirty DCs after the CBS build process:
Info:dirty DC in compartment abs-team.de_ABS_ESS_1 (dcname=ess/example2 dcvendor=abs-team.de)
Fatal:communication error: CBS error: dirty DCs in buildspace after the CBS build process
Info:Step CBS-make ended with result 'fatal error' ,stopping execution at 2006-03-25 11:30:18.0778 +1:00
The CBS server log mentioned above says the following
Build number assigned: 15596
Change request state from QUEUED to PROCESSING
ACTIVATION request in Build Space "ABS_ESS_C" at Node ID: 7,411,650
[id: 6,706; parentID: 0; type: 4]
[options: IGNORE BROKEN DC, FORCE ACTIVATE PREDECESSORS, FORCE INTEGRATED, IGNORE COMPONENT INTERSECTION]
REQUEST PROCESSING started at 2006-03-25 10:29:54.169 GMT
===== Pre-Processing =====
List of activities to be activated:
The following components belong to activities which already have been activated before:
abs-team.de/ess/example2
They will be removed from this request.
Analyse dependencies to predecessor activities... started at 2006-03-25 10:29:54.200 GMT
Analyse dependencies to predecessor activities... finished at 2006-03-25 10:29:54.200 GMT and took 0 ms
Analyse activities... started at 2006-03-25 10:29:54.200 GMT
Analyse activities... finished at 2006-03-25 10:29:54.200 GMT and took 0 ms
Calculate all combinations of components and variants to be built...
Prepare build environment in the file system... started at 2006-03-25 10:29:54.216 GMT
Synchronize development configuration... finished at 2006-03-25 10:29:54.216 GMT and took 0 ms
Synchronize component definitions... finished at 2006-03-25 10:29:54.216 GMT and took 0 ms
Synchronize sources...
...Skipped for Activation with option : IGNORE BROKEN DC
Synchronize sources... finished at 2006-03-25 10:29:54.216 GMT and took 0 ms
Synchronize used libraries...
...Skipped for Activation with option : IGNORE BROKEN DC
Synchronize used libraries... finished at 2006-03-25 10:29:54.216 GMT and took 0 ms
Prepare build environment in the file system... finished at 2006-03-25 10:29:54.216 GMT and took 0 ms
===== Pre-Processing ===== finished at 2006-03-25 10:29:54.216 GMT and took 32 ms
===== Processing =====
===== Processing ===== finished at 2006-03-25 10:29:54.216 GMT and took 0 ms
===== Post-Processing =====
Check whether build was successful for all required variants...
..SKIPPED. Request option "IGNORE BROKEN DC" was given.
Update component metadata...
STORE build results...
Change request state from PROCESSING to SUCCEEDED
Analyse effect of applied changes to buildspace state... started at 2006-03-25 10:29:54.231 GMT
Handle Cycles...
No cycles detected.
Determine components that have become DIRTY due to this request...
No such components have been found.
Integrate activities into active workspace(s)...
Nothing to integrate in compartment abs-team.de_ABS_ESS_1
Analyse effect of applied changes to buildspace state... finished at 2006-03-25 10:29:55.028 GMT and took 797 ms
Request SUCCEEDED
===== Post-Processing ===== finished at 2006-03-25 10:29:55.028 GMT and took 812 ms
REQUEST PROCESSING finished at 2006-03-25 10:29:55.028 GMT and took 859 ms
I tried to build the application using the CBS. So I went to the respective Cons build space, and the DC shows with compile state yellow (middle icon). I clicked on the DC and built it. The building process failed with the following request log:
Build number assigned: 15696
Change request state from QUEUED to PROCESSING
BUILD request in Build Space "ABS_ESS_C" at Node ID: 7,411,650
[id: 6,747; parentID: 0; type: 2]
REQUEST PROCESSING started at 2006-03-25 11:04:39.622 GMT
===== Pre-Processing =====
Calculate all combinations of components and variants to be built...
"abs-team.de/ess/example2" variant "default"
Prepare build environment in the file system... started at 2006-03-25 11:04:39.872 GMT
Synchronize development configuration... finished at 2006-03-25 11:04:39.872 GMT and took 0 ms
Synchronize component definitions... finished at 2006-03-25 11:04:40.294 GMT and took 422 ms
Synchronize sources...
Prepare build environment in the file system... finished at 2006-03-25 11:04:40.512 GMT and took 640 ms
===== Pre-Processing ===== finished at 2006-03-25 11:04:40.512 GMT and took 890 ms
Change request state from PROCESSING to FAILED
ERROR! The following problem(s) occurred during request processing:
ERROR! The following error occurred during request processing:Failed to synchronize D:usrsapER4DVEBMGS00j2eeclusterserver0tempCBS3e.CACHE296DCsabs-team.deessexample2_comp/src/packages
REQUEST PROCESSING finished at 2006-03-25 11:04:40.512 GMT and took 890 ms
Does anyone have an idea what the problem could be? This error occurs with every application I have developed so far. Everything worked great until I had to import it into the Cons system.
P.S.: Importing the standard SAP ESS SCs into the Cons system worked fine; only with my own applications do I get this problem.
best regards,
Markus
Hello Kiran,
Yes, I have made sure that these 3 required SCs are added.
Everything is working, from loading the development configuration into the NWDS, to checking in the changes, to activating them and releasing them. Only when it comes to importing them into the Cons workspace do I get the error.
When I click on Import in the CMS, I get this first error in the default trace (right after I initiated the import, the first entry made!):
SQL statement is 'SELECT 'VSE'.'OBJECTID','VO'.'OBJECTTYPE' 'INTERNALTYPE','VSE'.'DEACTIVATED','R'.'RESOURCEID','R'.'RESOURCETYPE','R'.'OBJNAME','R'.'PATHURI','R'.'PATHID','R'.'FULLPATHID','R'.'DISPLAYNAME','R'.'CREATORNAME','R'.'CREATIONTIME','VSE'.'LASTMODIFIED' 'MODIFICATIONTIME','R'.'CONTENTTYPE','R'.'CONTENTLENGTH','R'.'CONTENTLANGUAGE','R'.'ISMASTERLANGUAGE','R'.'SOURCEURI','V'.'ISDELETED','R'.'DOCUMENTTYPE','R'.'FORMATVERSION','R'.'DOCTYPESTATUS','R'.'CONTENTSTOREID','R'.'FULLCONTSTOREID','R'.'CONTENTMD5','R'.'TOUCHEDPROPERTY','CS'.'ISINDELTA','CS'.'DELTAALGORITHM','CS'.'CONTENTBLOB','VSE'.'ACTIVATIONSEQNO'
FROM 'PVC_VSETELEMENT' 'VSE'
INNER JOIN 'DAV_RESOURCE' 'R' ON 'R'.'RESOURCEID' = 'VSE'.'VERSIONID'
INNER JOIN 'PVC_VERSIONEDOBJ' 'VO' ON 'VSE'.'OBJECTID' = 'VO'.'OBJECTID'
INNER JOIN 'PVC_VERSION' 'V' ON 'VSE'.'VERSIONID' = 'V'.'VERSIONID'
LEFT OUTER JOIN 'DAV_CONTENTSTORE' 'CS' ON 'CS'.'CONTENTSTOREID' = 'R'.'CONTENTSTOREID'
WHERE 'VSE'.'VERSIONSETID' = ? AND 'R'.'PATHURI' LIKE ? ESCAPE ? AND 'R'.'OBJNAME' LIKE ? ESCAPE ? AND 'VSE'.'LASTMODIFIED' > ? AND 'VSE'.'ACTIVATIONSEQNO' > ?
ORDER BY 7 DESC,9,3'.
"Error","/System/Database/sql/jdbc/direct","com.sap.sql.jdbc.direct.DirectPreparedStatement","sap.com/tc~dtr~enterpriseapp","SAPEngine_Application_Thread[impl:3]_26","7411650:D:usrsapER4DVEBMGS00j2eeclusterserver0logdefaultTrace.trc","000C297D05810064000002B90000035C00040FE39A4D1378","com.sap.sql.jdbc.direct.DirectPreparedStatement"
"-2009,I2009,[-2009]: Join columns too long,erp04.abs-team.de:ER4:SAPER4DB"
(The trace then repeats the same SELECT statement twice more together with the "[-2009]: Join columns too long" error, followed by the fields "erp04.abs45team.de_ER4_7411650","0e1778e0bcb611daab59000c297d0581","CMS_USER","0","0","com.sap.sql_0003","1","/System/Database/sql/jdbc/direct","","461","com.sap.sql.jdbc.direct.DirectPreparedStatement","SAPEngine_Application_Thread[impl:3]_26","","CMS_USER".)
regards,
Markus -
System/Query Performance: What to look for in these tcodes
Hi
I have been researching system/query performance in general in the BW environment.
I have seen tcodes such as
ST02: Buffer/table analysis
ST03: System workload
ST03N: System workload (new workload monitor)
ST04: Database monitor
ST05: SQL trace
ST06: Operating system monitor
ST66:
ST21:
ST22: ABAP runtime errors (short dumps)
SE30: ABAP runtime analysis
RSRT: Query performance
RSRV: Analysis and repair of BW objects
For example, Note 948066 provides descriptions of these tcodes, but what I am not getting are the thresholds and their implications. E.g., ST02 gives a tune summary screen with several rows and columns (not sure what they are called) containing numerical values.
Is there some information on these rows/columns, such as the typical range and acceptable figures for each, and which numbers under which columns suggest which problems?
Basically, some kind of metric for each of the indicators provided by these performance tcodes.
Something similar to an operating system, where CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
I would appreciate some guidelines on the use of these tcodes and, from your personal experience, which indicators you pay attention to under each tcode and why.
Thanks
Hi Amanda,
I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself. In the EarlyWatch report there will be red, yellow, and green lights for parameters.
http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
EarlyWatch focuses on the following aspects:
· Server analysis
· Database analysis
· Configuration analysis
· Application analysis
· Workload analysis
EarlyWatch Alert, a free part of your standard maintenance contract with SAP, is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also opt for an EarlyWatch session for a more detailed analysis of your system.
Ask your Basis team for an EarlyWatch sample report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow, and green indicators.
Understanding Your EarlyWatch Alert Reports
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
hope this helps. -
GL - Global Intercompany System - Interface import
Hi experts,
I need some help with the GIS interface import.
I create records in gl_iea_interface and run the Import Intercompany Transactions program. The program reports no errors, but I can't see any transaction lines being imported, just the transaction. When I look in the Enter Intercompany Transactions window, there are no lines, just the header and the clearing account.
And as you can see below, I set the line type to 'D' for the distribution (offset) line and not 'C' for the clearing line.
INSERT INTO GL_IEA_INTERFACE
(group_id, transaction_type_id, transaction_status_code, currency_code, gl_date, sender_subsidiary_id, receiver_subsidiary_id, line_type, -- required fields
transaction_number, description, note, line_debit, line_credit, -- optional fields
sender_segment1, sender_segment2, sender_segment3, sender_segment16, sender_segment17, sender_segment18, sender_segment19, sender_segment20, sender_segment21, sender_segment22, sender_segment24) -- optional fields
VALUES
(1, 1, 'R', 'DKK', sysdate, 3465, 10, 'D',
'TEST0004', 'desc_test', 'note_test', 311.22, null,
'19420', '5110', '4031400000', '10000000', '119000000', '9900', '06455111999000', '10000000000', '10000', '10000000000', '0');
After import:
See screenshot: Screenshot
Please help me, Thanks
Bobby Nielsen
OS: SUN Solaris 9
DB: Oracle 10g (10.2.0.4.0)
EBS: E-Business Suite 11i (11.5.10.2)
Import Intercompany Transactions:
Log:
Finans: Version : 11.5.0 - Development
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
GLIIMP module: Program - Import Intercompany Transactions
Current system time is 20-DEC-2010 13:16:19
->> gliimp() 20-DEC-2010 13:16:19
20-DEC-2010 13:16:19
GISI0001: Log file for the Global Interfund System import program
GLIIMP
1
1
->> gliini() 20-DEC-2010 13:16:19
<< gliini() 20-DEC-2010 13:16:19
->> glilck() 20-DEC-2010 13:16:19
<< glilck() 20-DEC-2010 13:16:19
->> gliccs() 20-DEC-2010 13:16:19
<< gliccs() 20-DEC-2010 13:16:19
->> glicrt() 20-DEC-2010 13:16:19
->> gluddl() 20-DEC-2010 13:16:19
<< gluddl() 20-DEC-2010 13:16:19
->> gluddl() 20-DEC-2010 13:16:19
<< gluddl() 20-DEC-2010 13:16:20
<< glicrt() 20-DEC-2010 13:16:20
->> glipop() 20-DEC-2010 13:16:20
ind_sd_ccid = 0 sd_ccid = 760844
ind_rd_ccid = 0 rd_ccid = -1
->> gliccc() 20-DEC-2010 13:16:20
<< gliccc() 20-DEC-2010 13:16:20
<< glipop() 20-DEC-2010 13:16:20
->> glicbo() 20-DEC-2010 13:16:20
->> glugst() 20-DEC-2010 13:16:20
<< glugst() 20-DEC-2010 13:16:20
->> glugst() 20-DEC-2010 13:16:20
<< glugst() 20-DEC-2010 13:16:20
<< glicbo() 20-DEC-2010 13:16:20
->> gliins() 20-DEC-2010 13:16:20
<< gliins() 20-DEC-2010 13:16:21
->> glignr() 20-DEC-2010 13:16:21
<< glignr() 20-DEC-2010 13:16:24
->> gliver() 20-DEC-2010 13:16:24
<< gliver() 20-DEC-2010 13:16:24
->> glirep() 20-DEC-2010 13:16:24
<< glirep() 20-DEC-2010 13:16:24
->> gliclp() 20-DEC-2010 13:16:24
->> gluddl() 20-DEC-2010 13:16:24
<< gluddl() 20-DEC-2010 13:16:32
->> gluddl() 20-DEC-2010 13:16:32
<< gluddl() 20-DEC-2010 13:16:39
<< gliclp() 20-DEC-2010 13:16:39
GISI0003: The Global Interfund System import program completed without errors.
<< gliimp() 20-DEC-2010 13:16:39
Start of log messages from FND_FILE
End of log messages from FND_FILE
Executing request completion options...
------------- 1) PRINT -------------
Printing output file.
Job queue ID : 3262523
Number of copies : 0
Printer : noprint
Execution of request completion options is finished.
Concurrent job completed without errors
Current system time is 20-DEC-2010 13:16:39
Out:
Global Interfund System Import Execution Report          Date: 20-DEC-10 13:16
Concurrent request ID: 3262523                           Page: 1
Status: Completed
Transaction type: Internal settlement     Group ID: 1
Total transactions: 1    Total transaction errors: 0
Total lines: 1           Total line errors: 0
======================================================= Transactions created =======================================================
Warning   Transaction number   Total lines
WR02      TEST0006-IMP         1
======================================================== Transaction errors ========================================================
Transaction number   Sender   Receiver   Currency   GL date   Error code
============================================================ Line errors ===========================================================
====================================================== Error and warning key =======================================================
Warning codes
WR01: Original status = APPROVED, Imported status = NEW
WR02: Original status = REVIEW, Imported status = NEW
Transaction-level error codes
TR01: Grouping criteria differ.
TR02: Line-level error found.
TR03: More than one clearing line for this transaction.
Line-level error codes
LN01: Invalid currency code.
LN02: Invalid line type.
LN03: Both debit and credit are populated on this line.
LN04: Invalid subsidiary ID.
LN05: Invalid transaction status code.
LN06: Invalid GL date for sender.
LN07: Invalid GL date for receiver.
LN08: Detail posting not allowed in sender account.
LN09: Detail posting not allowed in receiver account.
LN10: Sender account is a summary account.
LN11: Receiver account is a summary account.
LN12: Sender account is disabled.
LN13: Receiver account is disabled.
LN14: Sender account is not active for the specified GL date.
LN15: Receiver account is not active for the specified GL date.
LN16: Error creating new sender code combination (flexfield error).
LN17: Error creating new receiver code combination (flexfield error).
Global Interfund System Import Execution Report          Date: 20-DEC-10 13:16
Concurrent request ID: 3262523                           Page: 2
LN18: Sender and receiver segments are null.
LN19: The sender company segment value does not match the company value of the sending subsidiary.
LN20: The receiver company segment value does not match the company value of the receiving subsidiary.
***** End of report ***** -
In an Oracle 10g global support system, I have set the NLS_DATE_FORMAT parameter. I am wondering whether it affects the default value of NLS_TIMESTAMP_FORMAT.
Message was edited by:
frank.qian
test@ORCL> select sysdate from dual;
SYSDATE
24-NOV-06
Elapsed: 00:00:00.00
test@ORCL> DECLARE
2 checkout TIMESTAMP(3);
3 BEGIN
4 checkout := '22-JUN-2004 07:48:53.275';
5 DBMS_OUTPUT.PUT_LINE( TO_CHAR(checkout));
6 END;
7 /
22-JUN-04 07.48.53.275 AM
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
test@ORCL> alter session set nls_date_format="MM/DD/YYYY";
Session altered.
Elapsed: 00:00:00.00
test@ORCL> select sysdate from dual;
SYSDATE
11/24/2006
Elapsed: 00:00:00.00
test@ORCL> DECLARE
2 checkout TIMESTAMP(3);
3 BEGIN
4 checkout := '22-JUN-2004 07:48:53.275';
5 DBMS_OUTPUT.PUT_LINE( TO_CHAR(checkout));
6 END;
7 /
22-JUN-04 07.48.53.275 AM
PL/SQL procedure successfully completed.
test@ORCL> alter session set NLS_TIMESTAMP_FORMAT = 'DD/MM/YYYY HH:MI:SS.FF';
Session altered.
Elapsed: 00:00:00.00
test@ORCL> DECLARE
2 checkout TIMESTAMP(3);
3 BEGIN
4 checkout := '22-JUN-2004 07:48:53.275';
5 DBMS_OUTPUT.PUT_LINE( TO_CHAR(checkout));
6 END;
7 /
22/06/2004 07:48:53.275
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
test@ORCL>
test@ORCL> -
Production Order Report - System Query
The Production Order Report in v2005A SP01 is not implemented in v2007A.
This report is important to some of our B1 user customers.
If the 2005A report is copied and pasted into the Query Generator in 2007A, it appears to work correctly, but the risks are that:
(a) table changes which have been made have effects which aren't apparent on normal inspection, and
(b) it will eventually be lost during upgrades and users will have to recreate it themselves, for which many don't have the expertise.
Please verify that the functionality of the 2005A query remains correct in 2007A, and reinstate it as a system query in a future release.
Regards,
Mike Burmeister
Hi,
In SAP B1 2007A SP01 PL10, there is a change log. You can check all the differences made by users in the past in the change log.
The table is AWOR. The product no. is taken from OITM. You can use both tables to find the comparison.
JimM -
Sun Cluster 3.2 - Global File Systems
Sun Cluster has a Global File System (GFS) that supports read-only access throughout the cluster; however, only one node has write access.
In Linux, a GFS file system can be mounted by multiple nodes for simultaneous read/write access. Shouldn't this be the same for Solaris as well?
From the documentation that I have read,
"The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
The GFS is also known as Cluster File System or Proxy File system.
Our client believes that they can have their application "scaled" and that all nodes in the cluster can write to the globally mounted file system. My belief is that the only way this can occur is after the application has failed over, so the "write" would then occur from the "primary" node that is mastering the application at that time. Any input or clarification will be greatly appreciated. Thanks in advance.
Ryan
Thank you very much, this helped :)
And how seamless is remounting of the block device LUN if one server dies?
Should some clustered services (FS clients such as app servers) be restarted
when the master node changes due to failover? Or is it truly seamless,
as in a bit of latency added for the duration of mounting the block device on another
node, with no fatal interruptions sent to the clients?
And is it true that this solution is gratis, i.e., may legally be used for free
unless the customer wants support from Sun (authorized partners)? ;)
//Jim
Edited by: JimKlimov on Aug 19, 2009 4:16 PM -
Cannot See DC's in Cons system
Hi,
I copied all my DCs from track 1 to track 2.
Track 1 has software component ABC version 10 and track 2 has software component ABC version 20.
I followed this wiki to achieve this:
http://wiki.sdn.sap.com/wiki/display/JDI/StepbyStepguidetoMoveaDCfromoneSCtoanother
Everything got deployed fine in the Dev system of track 2. But now when I import the activities into the Cons system, I cannot see any DCs deployed.
The import log shows successful for all activities. Not sure why the DCs are not deployed in the Cons system.
Also: I did not have a Cons system maintained in the track runtime until now. I maintained the Cons runtime just before starting the import on the Consolidation tab. I hope that is not causing a problem.
Any idea what the next steps should be?
Thanks,
Yomesh.
Hi Pascal,
That was a helpful link. I think the deployments are stuck, as I see the following message in the log:
status = notification still not executed by CBS.
I need to dig deeper to understand why it is not deployed. But why is the import status shown as success if it is not yet deployed?
Thanks,
Yomesh. -
Transport management and Global trade system data to BI
Hi All,
We are trying to extract Transportation Management (TM) and Global Trade Services (GTS) data into BI. Can you please let me know the standard DataSources and the process to extract the data?
We are implementing EM, GTS and TM. Please let me know the process to extract the data into BI.
Regards
Sathish.
Hi All,
I'm able to extract GTS data now.
Can anyone please let me know how to extract Transportation Management data into BI?
Regards
Sathish. -
Problems mounting global file system
Hello all.
I have setup a Cluster using two Ultra10 machines called medusa & ultra10 (not very original I know) using Sun Cluster 3.1 with a Cluster patch bundle installed.
When one of the Ultra10 machines boots, it complains about being unable to mount the global file system and for some reason tries to mount the node@1 file system when it is actually node 2.
On booting, I receive this message on the machine ultra10:
Type control-d to proceed with normal startup,
(or give root password for system maintenance): resuming boot
If I use Ctrl-D to continue, then the following happens:
ultra10:
ultra10:/ $ cat /etc/cluster/nodeid
2
ultra10:/ $ grep global /etc/vfstab
/dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@2 ufs 2 no global
ultra10:/ $ df -k | grep global
/dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
medusa:
medusa:/ $ cat /etc/cluster/nodeid
1
medusa:/ $ grep global /etc/vfstab
/dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@1 ufs 2 no global
medusa:/ $ df -k | grep global
/dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
Does anyone have any idea why the machine called ultra10 of node ID 2 is trying to mount the node ID 1 global file system when the correct entry is within the /etc/vfstab file?
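The mismatch can be checked mechanically: compare the node ID against the node@N suffix in the vfstab global-devices entry and against what is actually mounted. A minimal sketch; the sample values are copied from the ultra10 output above (on a live node you would read them with cat /etc/cluster/nodeid, grep on /etc/vfstab, and df):

```shell
# Sample values taken from the ultra10 output above.
nodeid=2
vfstab_mount="/global/.devices/node@2"   # 3rd field of the vfstab line
actual_mount="/global/.devices/node@1"   # what df actually reports

expected="/global/.devices/node@${nodeid}"
[ "$vfstab_mount" = "$expected" ] && echo "vfstab entry matches node ID"
[ "$actual_mount" = "$expected" ] || echo "MISMATCH: mounted $actual_mount, expected $expected"
```

For this data the vfstab entry is consistent with the node ID, but the mounted path is not, which points at something outside vfstab (e.g. the did/metadevice mapping) rather than a typo in the file.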
Many thanks for any assistance.
Hmm, so for argument's sake, if I tried to mount both /dev/md/dsk/d50 devices to the same point in the filesystem for both nodes, it would mount OK?
I assumed the problem was because the device being used has the same name, and was confusing the Solaris OS when both nodes tried to mount it. Maybe some examples will help...
My cluster consists of two nodes, Helene and Dione. There is fibre-attached storage used for quorum, and website content. The output from scdidadm -L is:
1 helene:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 helene:/dev/rdsk/c0t1d0 /dev/did/rdsk/d2
3 helene:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
3 dione:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
4 dione:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 dione:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5
This allows me to have identical entries in both host's /etc/vfstab files. There are also shared devices under /dev/global that can be accessed by both nodes. But the RAID devices are not referenced by anything from these directories (i.e. there's no /dev/global/md/dsk/50). I just thought it would make sense to have the option of global meta devices, but maybe that's just me!
Thanks again Tim! :D
Pete -
Global Consolidation System setup
Hi all,
I am new to the Global Consolidation System. Our client needs to consolidate two new entities into the consolidated SOB, but not 100% of the account balances. The ownership is as follows:
The holding company holds 75% of entity A and 55% of entity B. So, how can I set these percentages in the consolidation setup?
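Where exactly the ownership percentage is entered depends on the GCS release and setup, so treat the product side as release-dependent. As plain arithmetic, though, the percentages mean each subsidiary contributes only its owned share to the parent; a sketch with invented balance amounts:

```shell
# Arithmetic illustration only; the balance amounts are invented and
# this is NOT how GCS stores or configures ownership.
pct_a=75           # holding company's share of entity A
pct_b=55           # holding company's share of entity B
balance_a=100000   # hypothetical entity A account balance
balance_b=200000   # hypothetical entity B account balance

echo "Consolidated share of A: $(( balance_a * pct_a / 100 ))"
echo "Consolidated share of B: $(( balance_b * pct_b / 100 ))"
```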
Please help.
Sunny
Hi everybody!
I have the same problem.
Which do we choose?
o BPC (Ex Outlooksoft)
o Financial Consolidation (Ex Cartesis)
Thanks -
Unable to remount the global file system
Hello All,
I am facing a problem when remounting the global file system on one of the nodes in the cluster.
Here are my system details:
OS: SunOS sf44buce02 5.10 Generic_141414-01 sun4u sparc SUNW,Sun-Fire-V440
SunCluster Version:3.2
The problem details:
I have the following entry in my /etc/vfstab file:
dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging
and now I wanted to add the "nosuid" option to the global file system. I used the following command, but it didn't succeed.
# mount -o nosuid,remount /global/TspFt
I am getting the following error:
mount: Operation not supported
mount: Cannot mount /dev/md/cfsdg/dsk/d10
Can anyone tell me how to remount the global file system without a reboot?
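Remounting a cluster-global (PxFS) mount in place is typically not supported, which matches the "Operation not supported" error above; the usual route is to add the option to the vfstab entry on every node and then unmount and mount the file system once (umount /global/TspFt; mount /global/TspFt) at a quiet time. The vfstab edit itself can be scripted; a sketch against the (slash-corrected) line from this post, where field 7 holds the mount options:

```shell
# Append "nosuid" to the mount-options field (7th column) of a vfstab
# line. Apply the edited line to /etc/vfstab on every node, then
# umount/mount the global file system once.
line="/dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging"
echo "$line" | awk '{ $7 = $7 ",nosuid"; print }'
```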
Thanks in advance.
Regards,
Rajeshwar
Hi,
Thank you very much for the reply. Please see below the details you asked for:
-> The volume manager I am using is *"SUN"*.
-> In my previous post I missed a "*/*" while pasting the vfstab entry. Please have a look at the vfstab entry below.
*/dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging,nosuid,noxattr*
- Output of ls -al /dev/md/
root@sf44buce02> ls -al /dev/md/
total 34
drwxr-xr-x 4 root root 512 Jun 24 16:37 .
drwxr-xr-x 21 root sys 7168 Jun 24 16:38 ..
lrwxrwxrwx 1 root root 31 Jun 3 20:19 admin -> ../../devices/pseudo/md@0:admin
lrwxrwxrwx 1 root root 8 Jun 24 16:37 arch1dg -> shared/2
lrwxrwxrwx 1 root other 8 Jun 3 22:26 arch2dg -> shared/4
lrwxrwxrwx 1 root root 8 Jun 24 16:37 cfsdg -> shared/1
drwxr-xr-x 2 root root 1024 Jun 3 22:41 dsk
lrwxrwxrwx 1 root other 8 Jun 3 22:27 oradg -> shared/5
drwxr-xr-x 2 root root 1024 Jun 3 22:41 rdsk
lrwxrwxrwx 1 root root 8 Jun 24 16:37 redodg -> shared/3
lrwxrwxrwx 1 root root 42 Jun 3 22:02 shared -> ../../global/.devices/node@2/dev/md/shared
- output of ls -al /dev/md/cfsdg/
root@sf44buce02> ls -al /dev/md/cfsdg/
total 8
drwxr-xr-x 4 root root 512 Jun 3 22:29 .
drwxrwxr-x 7 root root 512 Jun 3 22:29 ..
drwxr-xr-x 2 root root 512 Jun 24 16:37 dsk
drwxr-xr-x 2 root root 512 Jun 24 16:37 rdsk
- output of ls -la /dev/md/cfsdg/dsk/.
root@sf44buce02> ls -al /dev/md/cfsdg/dsk
total 16
drwxr-xr-x 2 root root 512 Jun 24 16:37 .
drwxr-xr-x 4 root root 512 Jun 3 22:29 ..
lrwxrwxrwx 1 root root 42 Jun 24 16:37 d0 -> ../../../../../devices/pseudo/md@0:1,0,blk
lrwxrwxrwx 1 root root 42 Jun 24 16:37 d1 -> ../../../../../devices/pseudo/md@0:1,1,blk
lrwxrwxrwx 1 root root 43 Jun 24 16:37 d10 -> ../../../../../devices/pseudo/md@0:1,10,blk
lrwxrwxrwx 1 root root 43 Jun 24 16:37 d11 -> ../../../../../devices/pseudo/md@0:1,11,blk
lrwxrwxrwx 1 root root 42 Jun 24 16:37 d2 -> ../../../../../devices/pseudo/md@0:1,2,blk
lrwxrwxrwx 1 root root 43 Jun 24 16:37 d20 -> ../../../../../devices/pseudo/md@0:1,20,blk -
Financial Consolidation Hub (FCH) vs Global Consolidation System (GCS).
Does anyone know the benefits offered by Financial Consolidation Hub (FCH) over the Global Consolidation System (GCS)?
GCS comes with the GL module and it is pretty sophisticated. My question is: what benefits does the FCH module (which is new in R12) offer over GCS?
Thanks
Found a Web ADI for entering historical rates in FCH:
responsibility fch>>setup>>rates>>create historical rates
But is there any other way to enter historical rates, e.g. an API or interface table?