Documenting means

Hi Gurus,
   In support projects, what does this actually mean:
       "Documenting the configuration and program modification requirements and developing specifications for customizing"
   Please tell me, Gurus - it is quite urgent for me.
Thanks in advance,
Abdul Khader.

Hi Abdul,
In support projects, you have to document the changes you have made in the production system. Basically there are three types of support issues - configuration changes, development/enhancements, and routine support.
Configuration - means carrying out customizing settings, such as creating payment terms or making changes to the credit rep, etc.
Development/Enhancement - any custom program (Z program) that needs to be modified is treated as a development issue.
Both configuration and development issues require documentation covering the business requirement, the changes proposed, the test results (before and after scenarios), user approval of the testing, etc. Additionally, for a development issue you have to document the FTS (functional/technical specifications document).
Routine support - does not involve any documenting; it only requires an analysis provided to the business. To support the analysis you made, you might have to document the screenshots.
Hope this clarifies
Kindly assign points if helpful
regards,
radhika

Similar Messages

  • "Online documentation" customer specific guideline during data entry inHFM

    Hi Community,
    I am looking for solutions:
    A customer wants to set up something like an "online documentation", meaning that when entering data on an account in HFM they want to “jump” into the
    descriptions and booking examples of their accounting guideline, which should be stored as a document or as a kind of online text.
    Currently they have a .pdf (their IFRS guideline) in Workspace, but they want to make it more interactive for their users, e.g. an online documentation
    or jumping directly to a specific position of the guideline within a document.
    Do you have experience from the field on how other customers have solved this?
    Thank you very much in advance!

    Hello,
    The master record of a customer account cannot be archived immediately. The system first has to check whether the following requirements have been met:
    The account must not contain any transaction figures in the system. Transaction figures from previous years that have not been archived will also prevent the system from deleting the account master record.
    The account must be marked for deletion in its master record.
    You should block an account for posting before you mark it for deletion. The only effect this deletion flag has is to cause a warning to be issued every time you subsequently try to post to this account.
    You can delete the test data before going live with your system.
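    For reference - an assumption about a typical ECC setup, not something stated above - the transactions usually involved are:
    XD05 - block the customer account for posting
    XD06 - set the deletion flag in the customer master
    SARA - run the archiving (archiving object FI_ACCRECV for customer master data)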
    Reg
    *assign points if useful

  • I do not understand the documentation, can anyone explain?

    I am trying to create workbooks for an EUL version 5 in a database version 7.1.3.
    According to the documentation on http://download-uk.oracle.com/docs/html/A90881_02/rdb_supp.htm#1007051, Discoverer Administrator and Desktop 9.0.2 are supported against a 7.1 database, but when I try to connect using the Desktop I get an ORA-00922 error.
    Metalink only has one entry from 2001, which does not offer a solution, only a suggestion to create a TAR.
    When I read the sections on features that are not supported, I see that Discoverer EUL5 workbooks are not supported in Oracle Rdb databases, but I don't understand what that means. I would expect these features to be unsupported in non-Oracle databases, instead of Oracle databases.
    Who can either provide a solution for the 00922 message or explain to me what the documentation means?
    Thanks in advance.

    ORA-00922: missing or invalid option
    Cause: An invalid option was specified in defining a column or storage clause. The valid option in specifying a column is NOT NULL to specify that the column cannot contain any NULL values. Only constraints may follow the datatype. Specifying a maximum length on a DATE or LONG datatype also causes this error.
    Action: Correct the syntax. Remove the erroneous option or length specification from the column or storage specification.
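    For illustration, a minimal statement that reproduces the error per the cause text above (table and column names are made up):
    CREATE TABLE demo_t (created DATE(10));      -- ORA-00922: a length is not allowed on DATE
    CREATE TABLE demo_t (created DATE NOT NULL); -- corrected: only constraints may follow the datatype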

  • I have a problem with my iPad. I could make online documentation before, but when I updated my iOS I can no longer make notes. How can I fix this problem?

    I have a problem with my iPad. Before, I could make online documentation, but when I updated my iOS I can no longer make notes. How can I fix this?

    What does "online documentation" mean exactly?  Are you using the Notes app to try to create notes in iCloud, for example?
    If so, that should still work in iOS 7.1.
    If not, please tell us specific symptoms of what is (or is not) happening.

  • Recurring Journal Entry Attaching Supporting Documentation

    Can someone tell me how to attach supporting documentation to a recurring journal so that it will show on each of the monthly entries, similar to what can be done on a manual journal.

    Hello,
    By supporting documentation, do you mean long notes for an entry, or an attached Word document?
    For long notes you can use the long text field.
    For the latter case, standard SAP does not give any option unless you go for a Z program in which
    you can build this kind of functionality.
    Below is a link which talks about linking a document in a sales order.
    How to Attach Document in Z Transaction
    Hope this helps

  • Problem with FMIS 4 and streaming of live events

    We have a problem on our platform and it's driving us nuts... no seriously... NUTS.
    We have triple checked every possible component from a hardware level up to a software configuration level.
    The problem: Our platform consists of 2 origin servers with 6 edges talking to them (really beefy hardware). Once we inject a live stream into our two origins, we can successfully get the stream out via the edges and play it via our player. Once we hit around 2200 concurrent connections, the FMIS servers drop all the connections busy with streams. From the logs, the only thing we can see is tons of disconnects with status code 103, which according to the online documentation means "Client disconnected due to server shutdown (or application unloaded)".
    We simulated the scenario with the FMS load simulator utility... and we start seeing errors + all connections dropped around the 2200 mark.
    The machines are Dell blades with dual CPU Xeons (quad cores) with around 50 gigs of ram per server... The edges are all on 10 Gb/s ethernet interfaces as well. 
    We managed to generate a nice big fat coredump on the one origin and the only thing visible from inspecting the core dumps + logs is the following :
    2011-10-05 15:44:10 22353 (e)2641112 JavaScript runtime is out of memory; server shutting down instance (Adaptor: _defaultRoot_, VHost: _defaultVHost_, App: livestreamcast_origin/_definst_). Check the JavaScript runtime size for this application in the configuration file.
    And from the core dump :
    warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7fff9ddfc000
    Core was generated by `/opt/adobe/fms/fmscore -adaptor _defaultRoot_ -vhost _defaultVHost_ -app -inst'.
    Program terminated with signal 11, Segmentation fault.
    #0  0x00002aaaab19ab22 in js_MarkGCThing () from /opt/adobe/fms/modules/scriptengines/libasc.so
    (gdb) bt
    #0  0x00002aaaab19ab22 in js_MarkGCThing () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #1  0x00002aaaab196b63 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #2  0x00002aaaab1b316f in js_Mark () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #3  0x00002aaaab19a673 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #4  0x00002aaaab19a6f7 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #5  0x00002aaaab19ab3d in js_MarkGCThing () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #6  0x00002aaaab19abbe in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #7  0x00002aaaab185bbe in JS_DHashTableEnumerate () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #8  0x00002aaaab19b39d in js_GC () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #9  0x00002aaaab17e6d7 in js_DestroyContext () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #10 0x00002aaaab176bf4 in JS_DestroyContext () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #11 0x00002aaaab14f5e3 in ?? () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #12 0x00002aaaab14fabd in JScriptVMImpl::resetContext() () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #13 0x00002aaaab1527b4 in JScriptVMImpl::postProcessCbk(unsigned int, bool, int) ()
       from /opt/adobe/fms/modules/scriptengines/libasc.so
    #14 0x00002aaaab1035c7 in boost::detail::function::void_function_obj_invoker3<boost::_bi::bind_t<void, boost::_mfi::mf3<void, IJScriptVM, unsigned int, bool, int>, boost::_bi::list4<boost::_bi::value<IJScriptVM*>, boost::arg<1>, boost::arg<2>, boost::arg<3> > >, void, unsigned int, bool, int>::invoke(boost::detail::function::function_buffer&, unsigned int, bool, int) ()
       from /opt/adobe/fms/modules/scriptengines/libasc.so
    #15 0x00002aaaab0fddf6 in boost::function3<void, unsigned int, bool, int>::operator()(unsigned int, bool, int) const ()
       from /opt/adobe/fms/modules/scriptengines/libasc.so
    #16 0x00002aaaab0fbd9d in fms::script::AscRequestQ::run() () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #17 0x00002aaaab0fd0eb in boost::detail::function::function_obj_invoker0<boost::_bi::bind_t<bool, boost::_mfi::mf0<bool, fms::script::AscRequestQ>, boost::_bi::list1<boost::_bi::value<fms::script::IntrusivePtr<fms::script::AscRequestQ> > > >, bool>::invoke(boost::detail::function::function_buffer&) () from /opt/adobe/fms/modules/scriptengines/libasc.so
    #18 0x00000000009c7327 in boost::function0<bool>::operator()() const ()
    #19 0x00000000009c7529 in fms::script::QueueRequest::run() ()
    #20 0x00000000008b868a in TCThreadPool::launchThreadRun(void*) ()
    #21 0x00000000008b8bd6 in TCThreadPool::__ThreadStaticPoolEntry(void*) ()
    #22 0x00000000008ba496 in launchThreadRun(void*) ()
    #23 0x00000000008bb44f in __TCThreadEntry(void*) ()
    #24 0x000000390ca0673d in start_thread () from /lib64/libpthread.so.0
    #25 0x000000390bed44bd in clone () from /lib64/libc.so.6
    From the backtrace above, the crash actually happens inside the JavaScript engine's garbage collector (js_MarkGCThing) while the script context is being destroyed; the start_thread/clone frames at the bottom are just the thread entry points.
    I am really hoping there is someone out there who can guide us in the right direction with regards to how we can pinpoint why our platform cannot cope with a pathetic 2200 connections before the FMIS daemon drops all connected streams.
    There has to be someone out there who has run into this or a similar problem...  HELP !!!!
    Any feedback / ideas would be greatly appreciated.

    Thank you very much for the reply :-)
    We have been fiddling with the platform on many levels yesterday, and one thing we did do was bump that value up from 1024 to 8192... This made a HUGE improvement: the platform now holds the live streaming connections (up to 8000 per edge).
    I think for future reference, and to aid other people who might run into this problem, it's a good idea to increase this value. From what we have seen, read, and heard, the default value is fairly conservative. It is supposed to grow when the load demands it; however, if you have a large number of connections coming in at once from multiple locations, it can grow too quickly, which can cause the application to be reloaded (which disconnects all users, i.e. all edge servers connected to this origin).
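    For anyone searching later: the value in question is the JavaScript runtime size that the log message above refers to. Assuming a stock FMS 4 layout, it is set per application in Application.xml, something like:
    <Application>
      <JSEngine>
        <!-- Maximum JS runtime size in KB; the conservative default is 1024 -->
        <RuntimeSize>8192</RuntimeSize>
      </JSEngine>
    </Application>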
    Another option we were recommended to modify was the following :
    In adaptor.xml you currently have this line:
    <HostPort name="edge1" ctl_channel="localhost:19350" rtmfp="${ADAPTOR.HOSTPORT}">${ADAPTOR.HOSTPORT}</HostPort>
    You can set this to
                    <HostPort name="edge1" ctl_channel="localhost:19350" rtmfp=":80">:80</HostPort>
    <HostPort name="edge2" ctl_channel="localhost:19351" rtmfp=":1935">:1935</HostPort>
    This will create two edge processes for both ports 80 and 1935. Currently both ports are served from the same fmsedge process.
    Of course this is not a huge performance improvement, but it should further distribute load over more processes, which is a good thing, especially with that many connections.
    This setting can be made on all machines (origin + edge).
    Hopefully this could help other people also running into the same problems we have seen ...

  • The definition of the READ RFC to the managed system could not be found. Pl

    Hi there,
    I am trying to set up a managed system on our Solution Manager for E2E analysis... but the setup gave me a series of errors:
    The definition of the READ RFC to the managed system could not be found. Please make sure to run the RFC creation assistant in SMSY, as described in setup documentation
    and
    No RFC Read User (SOLMANSOX001 or SM_SID) was found at ABAP Host=HOST sys=00 client=001 for roles assignment. Please make sure you have run the RFC creation assistant for this system, in SMSY. If a user name different  from the default 'SM_<SID>' or 'SOLMAN<SID><CLIENT>' was specified in the SMSY RFC creation assistant, you need to assign by hand the following roles & profiles to the Read RFC User :  SAP_SATELLITE_E2E (role), S_AI_SMD_E2E (profile)
    So which documentation does the setup mean - where is it described how to create the RFC destinations with SMSY? Thanks
    Regards

    Hello,
    The documentation is located at [Help|http://help.sap.com/saphelp_smehp1/helpdata/en/b3/64c33af662c514e10000000a114084/frameset.htm]
    Then navigate Basic Settings > Solution Manager System Landscape >  Create Systems > Generate RFC Connections
    You can:
    Assign an existing RFC destination to a client with Assign and Check RFC Destinations.
    Generate RFC destinations for your managed systems
    Change existing RFC connections.
    Delete RFC destinations.
    Resolve RFC Generation Errors
    I hope you find this information helpful.
    Regards,
    Paul

  • Dg4odbc, unixODBC, freeTDS - connection problems to MS SQL 2008

    I am trying to set up a database link between my 64bit Oracle 11g running on CentOS 6.2 and my MS SQL 2008 server running on MS Windows server 2003. I have installed the following -
    freeTDS - version 0.91 (64 bit)
    unixODBC - version 2.3.1 (64 bit)
    I have successfully configured ODBC and freeTDS so that I can connect using isql and query my MSSQL database. The problem I am having is connecting Oracle to MSSQL; I am sure it is a simple configuration error, but I have been going round in circles with this and hope someone can help!
    freetds.conf
    [global]
    timeout = 10
    connect timeout = 10
    text size = 64512
    [CERM]
    host = 192.168.xxx.xxx
    port = 1101
    tds version = 7.2
    instance = SSLSQLDB
    dump file = /tmp/dump.log
    odbc.ini
    [ODBC Data Sources]
    CERM=TDS connection
    [CERM]
    Servername = CERM
    Description = TDS connection
    Driver = /usr/local/lib/libtdsodbc.so
    UsageCount = 1
    FileUsage = 1
    [ODBC]
    Trace=255
    odbcinst.ini
    [TDS]
    Description = FreeTDS driver for MS SQL
    Driver = /usr/local/lib/libtdsodbc.so
    Setup = /usr/lib64/libtdsS.so
    Trace = Yes
    TraceFile = /tmp/freetd.log
    FileUsage = 1
    [FreeTDS]
    Description = FreeTDS driver for MS SQL
    Driver = /usr/local/lib/libtdsodbc.so
    Setup = /usr/lib64/libtdsS.so
    Trace = Yes
    TraceFile = /tmp/freetd.log
    FileUsage = 1
    (Because I have put the actual path to the driver in the odbc.ini file, I don't believe the odbcinst.ini file is actually used.)
    inithscerm.ora
    # This is a sample agent init file containing the HS parameters that
    # are needed for an ODBC Agent.
    # HS init parameters
    HS_FDS_CONNECT_INFO=CERM
    HS_FDS_TRACE_LEVEL=255
    #HS_FDS_TRACE_FILE_NAME = /tmp/hsodbcsql.trc
    HS_FDS_SHAREABLE_NAME=/usr/local/lib/libodbc.so
    HS_FDS_SUPPORT_STATISTICS=FALSE
    set ODBCINI=/usr/local/etc/odbc.ini
    (my odbc.ini file is located in /usr/local/etc)
    listener.ora
    # listener.ora Network Configuration File: /usr/oracle/product/network/admin/listener.ora
    # Generated by Oracle configuration tools.
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = ssl-oracle.domain)(PORT = 1521))
        )
      )
    ADR_BASE_LISTENER = /usr/oracle
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = hscerm)
          (ORACLE_HOME = /usr/oracle/product)
          (PROGRAM = dg4odbc)
          (ENVS = LD_LIBRARY_PATH=/usr/local/lib:$ORACLE_HOME/lib)
        )
        (SID_DESC =
          (SID_NAME = PROD)
          (ORACLE_HOME = /usr/oracle/product)
        )
      )
    tnsnames.ora
    # tnsnames.ora Network Configuration File: /usr/oracle/product/network/admin/tnsnames.ora
    # Generated by Oracle configuration tools.
    PROD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = ssl-oracle.domain)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = PROD.DOMAIN)
        )
      )
    hscerm =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = ssl-oracle.domain)(PORT = 1521))
        (CONNECT_DATA = (SID = hscerm))
        (HS = OK)
      )
    Right - I can tnsping my hscerm instance and that returns ok, so I'm fairly sure the configuration is fine for both tnsnames.ora and listener.ora. I can isql connect to the ODBC-defined name for the MSSQL database, but when I create a database link in Oracle and then test it, I get the following trace output:
    [ODBC][14030][1339512618.356535][SQLSetConnectAttrW.c][332]
    Entry:
    Connection = 0x2054640
    Attribute = SQL_ATTR_AUTOCOMMIT
    Value = (nil)
    StrLen = -5
    [ODBC][14030][1339512618.356616][SQLSetConnectAttrW.c][616]
    Exit:[SQL_SUCCESS]
    [ODBC][14030][1339512618.356984][SQLDriverConnectW.c][290]
    Entry:
    Connection = 0x2054640
    Window Hdl = (nil)
    Str In = [DNCR;I=APDagj20][length = 30]
    Str Out = 0x2053408
    Str Out Max = 1024
    Str Out Ptr = 0x7fff6d305770
    Completion = 0
    [ODBC][14030][1339512618.357030][SQLDriverConnectW.c][500]Error: IM002
    [ODBC][14030][1339512618.357115][SQLGetDiagRecW.c][508]
    Entry:
    Connection = 0x2054640
    Rec Number = 1
    SQLState = 0x7fff6d3053d0
    Native = 0x7fff6d3051c4
    Message Text = 0x7fff6d3051d0
    Buffer Length = 510
    Text Len Ptr = 0x7fff6d305420
    [ODBC][14030][1339512618.357153][SQLGetDiagRecW.c][550]
    Exit:[SQL_SUCCESS]
    SQLState = IM002
    Native = 0x7fff6d3051c4 -> 0
    Message Text = [[unixODBC][Driver Manager]Data source name not found, and no default driver specified]
    [ODBC][14030][1339512618.357197][SQLGetDiagRecW.c][508]
    Entry:
    Connection = 0x2054640
    Rec Number = 2
    SQLState = 0x7fff6d3053d0
    Native = 0x7fff6d3051c4
    Message Text = 0x7fff6d3051d0
    Buffer Length = 510
    Text Len Ptr = 0x7fff6d305420
    [ODBC][14030][1339512618.357228][SQLGetDiagRecW.c][550]
    Exit:[SQL_NO_DATA]
    [ODBC][14030][1339512618.357291][SQLDisconnect.c][208]
    Entry:
    Connection = 0x2054640
    [ODBC][14030][1339512618.357321][SQLDisconnect.c][237]Error: 08003
    [ODBC][14030][1339512618.357387][SQLFreeHandle.c][284]
    Entry:
    Handle Type = 2
    Input Handle = 0x2054640
    Now I can clearly see the error "Data source name not found, and no default driver specified", which according to all the documentation means that the entry HS_FDS_CONNECT_INFO=CERM does not match the entry in my odbc.ini file ([CERM]), but for the life of me I can't see why they don't match??
    Any help greatly received.

    Yeah I verified with isql but I have changed the odbc.ini file as you suggested -
    [root@ssl-oracle ~]# more /usr/local/etc/odbc.ini
    [ODBC Data Sources]
    CERM=TDS connection
    [CERM]
    Server = 192.168.xxx.xxx
    Driver = /usr/local/lib/libtdsodbc.so
    Database = sqlb00
    Port = 1101
    TDS_Version = 8.0
    QuotedId = YES
    [ODBC]
    Trace=255
    [root@ssl-oracle admin]# more inithscerm.ora
    # This is a sample agent init file containing the HS parameters that
    # are needed for an ODBC Agent.
    # HS init parameters
    HS_FDS_CONNECT_INFO=CERM
    HS_FDS_TRACE_LEVEL=255
    #HS_FDS_TRACE_FILE_NAME = /tmp/hsodbcsql.trc
    HS_FDS_SHAREABLE_NAME=/usr/local/lib/libodbc.so
    HS_FDS_SUPPORT_STATISTICS=FALSE
    HS_LANGUAGE=american_america.we8mswin1252
    HS_NLS_NCHAR=UCS2
    set ODBCINI=/usr/local/etc/odbc.ini
    [root@ssl-oracle admin]# osql -S CERM -U sa -P supersecretpassword
    checking shared odbc libraries linked to isql for default directories...
    trying /tmp/sql ... no
    trying /tmp/sql ... no
    trying /usr/loc ... no
    trying /tmp/sql.log ... no
    trying /home ... no
    trying /.odbc.ini ... no
    trying /usr/local/etc ... OK
    checking odbc.ini files
    cannot read "/root/.odbc.ini"
    reading /usr/local/etc/odbc.ini
    [CERM] found in /usr/local/etc/odbc.ini
    found this section:
    [CERM]
    Server = 192.168.xxx.xxx
    Driver = /usr/local/lib/libtdsodbc.so
    Database = sqlb00
    Port = 1101
    TDS_Version = 8.0
    QuotedId = YES
    looking for driver for DSN [CERM] in /usr/local/etc/odbc.ini
    found driver line: " Driver = /usr/local/lib/libtdsodbc.so"
    driver "/usr/local/lib/libtdsodbc.so" found for [CERM] in odbc.ini
    found driver named "/usr/local/lib/libtdsodbc.so"
    /usr/local/lib/libtdsodbc.so is an executable file
    "Server" found, not using freetds.conf
    Server is "192.168.xxx.xxx"
    looking up hostname for ip address 192.168.xxx.xxx
    Configuration looks OK. Connection details:
    DSN: CERM
    odbc.ini: /usr/local/etc/odbc.ini
    Driver: /usr/local/lib/libtdsodbc.so
    Server hostname: ssl-database.domain
    Address: 192.168.xxx.xxx
    Attempting connection as sa ...
    + isql CERM sa supersecretpassword -v
    | Connected! |
    | |
    | sql-statement |
    | help [tablename] |
    | quit |
    | |
    SQL>
    sql.log
    [ODBC][31473][1339581606.488571][SQLSetConnectAttr.c][396]
    Entry:
    Connection = 0x26c2a30
    Attribute = SQL_ATTR_AUTOCOMMIT
    Value = (nil)
    StrLen = -5
    [ODBC][31473][1339581606.488638][SQLSetConnectAttr.c][681]
    Exit:[SQL_SUCCESS]
    [ODBC][31473][1339581606.488924][SQLDriverConnect.c][726]
    Entry:
    Connection = 0x26c2a30
    Window Hdl = (nil)
    Str In = [DSN=CERM;UID=SA;PWD=**********][length = 30]
    Str Out = 0x26c4b18
    Str Out Max = 1024
    Str Out Ptr = 0x7fff12846560
    Completion = 0
    UNICODE Using encoding ASCII 'ISO8859-1' and UNICODE 'UCS-2LE'
    DIAG [01000] [FreeTDS][SQL Server]Unknown host machine name.
    DIAG [08001] [FreeTDS][SQL Server]Unable to connect to data source
    [ODBC][31473][1339581606.491276][SQLDriverConnect.c][1353]
    Exit:[SQL_ERROR]
    [ODBC][31473][1339581606.491358][SQLGetDiagRec.c][680]
    Entry:
    Connection = 0x26c2a30
    Rec Number = 1
    SQLState = 0x7fff128461c0
    Native = 0x7fff12845fb4
    Message Text = 0x7fff12845fc0
    Buffer Length = 510
    Text Len Ptr = 0x7fff12846210
    [ODBC][31473][1339581606.491395][SQLGetDiagRec.c][717]
    Exit:[SQL_SUCCESS]
    SQLState = 08001
    Native = 0x7fff12845fb4 -> 0
    Message Text = [[unixODBC][FreeTDS][SQL Server]Unable to connect to data source]
    [ODBC][31473][1339581606.491442][SQLGetDiagRec.c][680]
    Entry:
    Connection = 0x26c2a30
    Rec Number = 2
    SQLState = 0x7fff128461c0
    Native = 0x7fff12845fb4
    Message Text = 0x7fff12845fc0
    Buffer Length = 510
    Text Len Ptr = 0x7fff12846210
    [ODBC][31473][1339581606.491493][SQLGetDiagRec.c][717]
    Exit:[SQL_SUCCESS]
    SQLState = 01000
    Native = 0x7fff12845fb4 -> 20013
    Message Text = [[unixODBC][FreeTDS][SQL Server]Unknown host machine name.]
    [ODBC][31473][1339581606.491528][SQLGetDiagRec.c][680]
    Entry:
    Connection = 0x26c2a30
    Rec Number = 3
    SQLState = 0x7fff128461c0
    Native = 0x7fff12845fb4
    Message Text = 0x7fff12845fc0
    Buffer Length = 510
    Text Len Ptr = 0x7fff12846210
    [ODBC][31473][1339581606.491558][SQLGetDiagRec.c][717]
    Exit:[SQL_NO_DATA]
    [ODBC][31473][1339581606.491623][SQLDisconnect.c][208]
    Entry:
    Connection = 0x26c2a30
    [ODBC][31473][1339581606.491652][SQLDisconnect.c][237]Error: 08003
    [ODBC][31473][1339581606.491719][SQLFreeHandle.c][284]
    Entry:
    Handle Type = 2
    Input Handle = 0x26c2a30
    [ODBC][31473][1339581606.491750][SQLFreeHandle.c][333]
    Exit:[SQL_SUCCESS]
    [ODBC][31473][1339581606.493083][SQLFreeHandle.c][219]
    Entry:
    Handle Type = 1
    Input Handle = 0x26abfe0
    I can also ping both the hostname and ip address of the MSSQL server.

  • 10.6.2 update causes all non-Apple applications to crash.

    Update: Final Cut Studio 3 and Aperture fail to open as well (pro apps).
    Hi: Hoping for some expert help here.
    Updated from 10.6.1 to 10.6.2 today, using the combo update. The Software Update version refused to install.
    After a reboot, I was *unable to open any non-Apple branded application*. Firefox, Adobe CS3 or CS4, Roxio, MS Office, you name it. All Apple branded apps, iLife, iWork, Mail, Safari, all performed well.
    I created a new admin account, hoping that those problems would not occur - but they did.
    I have since done a Time Machine backup and am ready to do a clean install to rid myself of this mess.
    Before I do, I'm wondering if there's an easy fix out there.
    I did do a repair permissions before and after the install. I have removed Application Support files from the Library (both in the root and my profile).
    Am now angry and frustrated, but this isn't the first time Apple has let me down with a poor update package.
    Any suggestions/help would be appreciated.
    Thanks.

    donv (The Ghost) wrote:
    I am still happily at 10.6 because .6.1 broke things for some and offered me nothing I needed. Looks like .6.2 combo will be the same. I have no incentive to install it.
    Keep in mind:
    1. There is no evidence that the Snow Leopard updates have broken anything for most users -- because of the focus of forums like this one on problems, they are not reliable indicators of typical results.
    2. Among the reports of Snow Leopard update problems, many if not most are caused by unidentified pre-existing conditions, not the updates themselves. These things should be fixed whether or not you update the OS, because they usually get worse over time & eventually can't be ignored.
    3. For almost all users there are very strong incentives to install the updates. Among them:
    • Eliminating bugs that have been identified in the original release of the OS, typically ones that occur under relatively rare conditions. Just because one has not affected you yet does not mean it won't in the future.
    • Adding small & sometimes not-so-small improvements in the reliability, efficiency, & optimization of applications & application services. Every OS release is subject to deadlines; there are always some things that could be improved given more time to work on them. Software updates give the engineers the time they need to add these tweaks to the system.
    • Eliminating potential security exploits that have been identified in earlier releases of the OS. The OS is pretty secure to begin with; however, it is not so secure that users should become complacent about this -- especially since many of the flaws are well documented, meaning those with malicious intent can easily learn how to exploit them. This has been a historic problem with Windows: so many users of that OS are so lax about applying security updates that Windows malware that could not exploit the updated OS still regularly spread across the Internet & affect tens of millions of users. This includes Mac users because the malware often turns the vulnerable computers into "zombies" used to create botnets to do things like generate spam or launch denial of service attacks.
    There is little indication that this kind of malware currently targets Macs -- Windows computers are still the low hanging fruit most malefactors target -- but continued complacency among Mac users about this issue plus the increasing market share of Macs do not bode well for the future.
    Simply put, all OS's should be considered ongoing works in need of improvement. Apple does not make public all the fixes or improvements any OS update provides, so unless you try them you won't know how significant they will be for you. Caution is good -- always have a backup/reversion strategy should something go wrong -- but the old adage "if it ain't broke don't fix it" does not apply -- the OS is always broken so it is really a matter of how broken it is, not if it is or isn't.

  • Report RFUMSV25

    Dear All,
    Reading the documentation of this program, we can read : "Partial payments can only be posted for one specific invoice at a time. If a partial payment affects several invoices, a separate document has to be posted for each invoice."
    What does it mean exactly ?
    Example : I receive a payment 1000.00, this payment is a partial payment of invoice X for 700.00 and a partial payment for invoice Y for 300.00.
    I use FEBAN to book my bank statement. FEBAN is customized to have the double booking (bank/bank sub-account) and (sub-account/customer). In this case, to book against the customer, I must do it manually, and here I book 1000 directly to the bank, 700.00 as payment for X and 300.00 as payment for Y.
    Does the documentation mean that I must not book like this? If I want to use FEBAN, must I first book 1000.00 against the customer without reference to the invoice, and then manually split this 1000.00 into a payment for invoice X in one document and a payment for invoice Y in another one?
    The same kind of question arises for this remark in the documentation: "Each payment transaction has to be posted separately, that is, several incoming or outgoing payments cannot be posted using just one document".
    We can imagine that we have one payment of 1000000.00 for the payment of invoices X, Y, Z. Does it mean that we must book this payment in 3 or 4 different documents, as for partial payments?
    Today I have a problem with my tax declaration - the base amount of VAT is wrong, and I'm afraid I have strange values in my deferred tax account. I suppose that it comes from how we book our payments. But if I understood correctly, the process seems very complicated and long.
    Could you confirm, please?
    Thanks a lot
    Nathalie

    You would basically have to maintain the tax payable account for account key UMS in customizing transaction OB89. The menu path is as follows:
    Financial Accounting (FI) > General Ledger Accounting > Business Transactions > Report > Sales/Purchases Tax Returns
    Thanks and regards
    Kedar

  • Incorrect PDF output after Bursting an Oracle 9i Report

    I have created a report for distribution. It is a single query based on several tables. I have 3 repeating frames - one for the Budget Holder, then the cost centres for the Budget Holder, and then the nominal accounts for each of the cost centres.
    I removed the repeating frame for the Budget Holder and in the Main Section of the report set the "Repeat On" field to Budget Holder.
    The XML that I am using is :-
    <!-- Send a mail with an attachment for each budget manager -->
    <destinations>
    <foreach>
    <mail id="BudMon1"
    to="[email protected]"
    subject="Budget Monitoring Position Statement for &amp;&lt;TRIM_SUBSTR_BMBALSUM_BUDGET_HO&gt;">
    <body srcType="text">Attached are your current periods Budget Monitoring Figures.</body>
    <attach format="pdf" name="rep_&amp;&lt;BUDGET_HOLDER&gt;.pdf" srcType="report" instance="this">
    <include src="mainSection"/>
    </attach>
    </mail>
    </foreach>
    </destinations>
    The distribution is working correctly as far as creating the correct number of files and attaching the files to the individual emails.
    The problem is the content of the PDF report. For example, suppose there are 2 Budget Holders and I run this report for each Budget Holder separately: it would generate 2 PDFs, where Budget Holder A has a 5 page report and Budget Holder B has a 2 page report.
    If I run the report for a range of Budget Holders and attempt to distribute it, it correctly generates 2 PDF files with their IDs as part of the filename.
    In the file named A.pdf the resultant PDF is 6 pages - pages 1-5 of A's data and then the second page of Budget Holder B's data (page 2 of 2).
    In the file named B.pdf the resultant PDF is 6 pages - pages 2-5 of A's data first, followed by the 2 pages of Budget Holder B's data.
    It does not matter how many Budget Holders are in the range, the result is always the same: each report contains pages 2 to ??? of every Budget Holder's data, except for the actual Budget Holder (M) whose report is being created, whose pages are all included.
    If the other Budget Holders excluding (M) have only page 1 of 1, then this is not included in the report for M.
    Has anyone else experienced this problem before?
    Thanks
    Michael

    Hi Jennifer,
    The errors you are getting, FRM-41214 and FRM-41217, according to the documentation mean the following:
    FRM-41214: Unable to run report.
    Cause: The report server was unable to run the specified report.
    Action: Check the report server and make sure it is up and running.
    FRM-41217: Unable to get report job status.
    Cause: There is a problem getting report status for a given report job.
    Action: Check the report server and make sure that the specified job exists.
    The report itself is not run, and the report job status could not be obtained. Can you retry the operation after restarting the report server, ensuring the report runs to success before firing the no-result report?
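    One quick way to make both checks, assuming a standard Reports Services install (host, port and server name are placeholders), is to request
    http://yourhost:7778/reports/rwservlet/showjobs?server=yourrepserver
    in a browser: if the page loads, the report server is up, and the job list shows whether the report job ran to success.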
    Thanks,
    Vinod.

  • Oracle 9i/10g - Parses/Invalidations

    Hello guys,
    I have seen something weird in Oracle 10g around the parsing and invalidation of SQL statements... maybe you can explain.
    So the following situation with my statements:
    alter system flush shared_pool;
    drop table a;
    create table a as select * from dba_objects;
    create index i1#a on a(object_id);
    exec dbms_stats.gather_table_stats(null,'a');
    variable v number
    begin
    :v := 1000;
    end;
    /
    prompt first
    select sum(data_object_id) from a where object_id=:v;
    select loads, invalidations, parse_calls, object_status, plan_hash_value
    from v$sql where sql_text='select sum(data_object_id) from a where object_id=:v';
    prompt changes
    update a set object_id=1000;
    commit;
    exec dbms_stats.gather_table_stats(null,'a');
    select loads, invalidations, parse_calls, object_status, plan_hash_value
    from v$sql where sql_text='select sum(data_object_id) from a where object_id=:v';
    prompt second
    select sum(data_object_id) from a where object_id=:v;
    select loads, invalidations, parse_calls, object_status, plan_hash_value
    from v$sql where sql_text='select sum(data_object_id) from a where object_id=:v';
    exit;
    And now the outputs of Oracle 9i and 10g...
    9i:
    first
    LOADS INVALIDATIONS PARSE_CALLS OBJECT_STATUS PLAN_HASH_VALUE
    1 0 1 VALID 3844206385
    changes
    LOADS INVALIDATIONS PARSE_CALLS OBJECT_STATUS PLAN_HASH_VALUE
    1 1 0 0
    second
    LOADS INVALIDATIONS PARSE_CALLS OBJECT_STATUS PLAN_HASH_VALUE
    2 1 1 VALID 3918351354
    Setting new stats on the table sets parse_calls to 0 and invalidations to 1. At the next execution of the statement it is reparsed and a new execution plan (hash value 3918351354) is created.
    OK, under 9i it is clear... but now have a look at 10g (with exactly the same statements):
    10g:
    LOADS INVALIDATIONS PARSE_CALLS OBJECT_STATUS PLAN_HASH_VALUE
    1 0 1 VALID 3844206385
    changes
    LOADS INVALIDATIONS PARSE_CALLS OBJECT_STATUS PLAN_HASH_VALUE
    1 0 1 VALID 3844206385
    second
    LOADS INVALIDATIONS PARSE_CALLS OBJECT_STATUS PLAN_HASH_VALUE
    1 0 2 VALID 3844206385
    The execution plan stays valid the whole time and a new one is not created (see the hash value); only a new parse occurs. But the execution plan changed after gathering the new stats (from index range scan to full table scan).
    Can you explain this? Why is the hash value of the execution plan the same, although it is a new plan?
    Thanks and regards
    Stefan

    I recall a Jonathan Lewis blog about DBMS_STATS changing default values significantly in 10g.
    If you compare the 9.2 declaration of the no_invalidate parameter...
    no_invalidate boolean default FALSE
    ...with that in 10g...
    no_invalidate boolean default to_no_invalidate_type(get_param('NO_INVALIDATE'))
    ...they initially appear similar - you would think that 'NO_INVALIDATE' means 'FALSE', right? Wrong...
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL> SELECT dbms_stats.get_param ('NO_INVALIDATE')
      2  FROM   DUAL;
    DBMS_STATS.GET_PARAM('NO_INVALIDATE')
    DBMS_STATS.AUTO_INVALIDATE
    SQL>...which means that the default behavior in 10g is AUTO_INVALIDATE, which bizarre double negative, according to documentation, means 'to have Oracle decide when to invalidate dependent cursors', which apparently means Oracle won't typically invalidate every dependent cursor, whereas in 9i it did, with potential performance problems.
    Exactly how this is related to your issue I am not sure, but it seems likely you can return to the previous behaviour, if so desired, by using no_invalidate => FALSE or by setting this parameter value with the SET_PARAM procedure.
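    For example, both of these calls exist in 10g (schema and table are from the test case above):
    exec dbms_stats.gather_table_stats(null, 'a', no_invalidate => false);  -- 9i-style immediate invalidation for this gather only
    exec dbms_stats.set_param('NO_INVALIDATE', 'FALSE');                    -- change the database-wide default (requires DBA privileges)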

  • Custom toJSON() method is not called for each property

    I'm getting familiar with JSON in ActionScript 3 (FP 11), and experimenting with custom toJSON methods. The Developer's Guide section on Native JSON Support clearly states:
    JSON.stringify() calls toJSON(), if it exists, for each public property that it encounters during its traversal of an object. A property consists of a key-value pair. When stringify() calls toJSON(), it passes in the key, k, of the property that it is currently examining. A typical toJSON() implementation evaluates each property name and returns the desired encoding of its value.
    However, both the subsequent examples on the very same page and my own tests indicate that, on the contrary, toJSON() is only called once for the whole object, and no argument is passed for k. Take the following trivial class:
    public class JSONTest {
        public var firstProperty:int = 1;
        public var secondProperty:String = "Hello world";
        public function toJSON(k:String):* {
            trace("Calling toJSON on key", k);
            return this[k].toString();
        }
    }
    According to the documentation this should cause the following to print:
    Calling toJSON on key firstProperty
    Calling toJSON on key secondProperty
    And the output should basically be the same as without a custom toJSON method:
    { "firstProperty" : 1, "secondProperty" : "Hello world" }
    Instead, this just throws an error because nothing at all is passed as k.
    So, is the documentation wrong and the engine is not supposed to call toJSON for every property, or is the engine incorrectly only calling it once?

    I just figured out what the documentation means. k refers to the name of a property to which the entire instance being stringified is assigned, not to the names of the properties of the instance being stringified. An example is much clearer:
    var wrapper:Object = { prop1 : 1, prop2 : new JSONTest() };
    trace("Stringify result =", JSON.stringify(wrapper));
    Prints:
    k:String = prop2
    Stringify result = {"prop2":{"firstProperty":1,"secondProperty":"Hello world"},"prop1":1}
    So k is the name of the property on wrapper, not on the JSONTest instance. When stringify is called directly on an instance of JSONTest, k is empty because the JSONTest instance is not assigned as a property of another object; it stands on its own. Either way, the value returned by toJSON must be a string or an object representing the stringified form of the entire instance.
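    So if you want custom output under these semantics, return the replacement value for the whole instance from toJSON - a hedged sketch (property names are illustrative):
    public function toJSON(k:String):* {
        // k names the property on the enclosing object (empty at the top level).
        // Whatever is returned here stands in for this entire instance.
        return { first: firstProperty, second: secondProperty.toUpperCase() };
    }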

  • Best practices for IPMP and LDoms?

    Having read the Oracle VM Server for SPARC 2.0 Administration Guide, it seems to imply that it might be possible to configure IPMP in the control domain (i.e. between the virtual switch interfaces), eliminating the necessity to configure IPMP on each individual guest domain. Specifically, it says:
    Configuring and Using IPMP in the Service Domain
    IPMP can be configured in the service domain by configuring virtual switch interfaces into a group. The following diagram shows two virtual switch instances (vsw0 and vsw1) that are bound to two different physical devices. The two virtual switch interfaces can then be plumbed and configured into an IPMP group. In the event of a physical link failure, the virtual switch device that is bound to that physical device detects the link failure. Then, the virtual switch device sends notification of this link event to the IP layer in the service domain, which results in failover to the other virtual switch device in the IPMP group.
    Unfortunately, when I configure IPMP in this way -- that is, with vsw0 and vsw1 in an IPMP group -- it doesn't appear to do what I'm looking for. I can ping the service domain, but not the guest domains which rely on that virtual switch.
    So, this is my question: is it possible to configure IPMP in the service domain and eliminate the need to configure IPMP in the guest domain, and if so, how? Or is it always necessary to share both virtual switches with the guest domain and setup IPMP with the guest domain, as in the example in the documentation?
    Thanks,
    Patrick Narkinsky

    I'm not 100% sure, but I think the documentation means that you can configure an IPMP group in the service domain for the vsw interfaces that the service domain itself uses for its networking:
    "The two virtual switch interfaces can then be plumbed and configured into an IPMP group." So, for example, vsw0 and vsw1 would be plumbed and an IP address would be assigned to the group. This does not mean the vsws that you are using for guest domain traffic.
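    For example, a minimal sketch of what the guide describes, in Solaris 10 ifconfig syntax (addresses and group name are illustrative):
    ifconfig vsw0 plumb 192.168.10.11 netmask + broadcast + group ipmp0 up
    ifconfig vsw1 plumb 192.168.10.12 netmask + broadcast + group ipmp0 up
    This protects the service domain's own IP across the two physical NICs, but it does nothing for the guest domains' traffic.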
    If you would like transparent multipathed networking (like MPxIO multipathing for SAN disks) for the virtual switches that provide networking to guest domains, I guess you could use aggregates (see the dladm command). The problem is, or at least has been, that the interfaces configured into an aggregate must be connected to the same switch, so there will be a SPOF. I am not sure whether modern network switch OSes have some capability to spread an aggregate (in the Cisco world, an EtherChannel) across two or more physical switches.
    So IMHO you have to configure the IPMP group in the guest domains.

  • BW 3.5 and BOBJ XI 3.1 SP3 Integration

    Hello. I am in a strange situation... we are using BW 3.5 SPS 18 for single sign-on and integration with BOBJ XI 3.1 SP3; the minimum required from the BW server side is BW 3.5 SPS 20. What happens if we integrate BOBJ with BW without applying SPS 19 and SPS 20? Any ideas? The reason for this is that very soon we are planning to upgrade to BW 7.3 (maybe after 12 to 18 months), and we don't want to spend any extra money on the BW 3.5 system. What do you guys recommend?

    If the supported-platforms documentation states a specific patch level, this means that the product functionality has been tested with this patch level, and if you run into trouble using this setup SAP will provide support and corrections for the problems that may arise.
    The fact that SPS 18 is not referenced in the official documentation means that either this setup was found to have issues during SAP's official pre-release tests, or that it was not tested at all; if you do run into issues, SAP's first suggestion will be to upgrade your SAP BW backend.
    Regards,
    Stratos
