Several questions regarding FMS 3.5

Hi,
I'm a newbie to FMS and I have several questions ;-).
I'm planning a livestream. To reach as many viewers as possible, my plan is to stream at 3 different bitrates. This works great ;-), but during my tests a few problems/ideas came up...
1. FMS is not recording the stream. I used Auto-DVR as well as manual start/stop. I also prefixed the stream name with mp4: ... any ideas? Is it perhaps because it is not the "interactive" edition?
2. Is there a _simple_ way to enable something like "auto quality" (switching between the 3 streams) on the client side, so that the best stream for the local connection is chosen?
3. Is there a simple way to get viewer stats?
Thanks a lot for your help

Hi,
1. You can record the livestream on the server by creating a custom application and doing a server-side record. Your main.asc would look something like this:
var pubCount = 0;
application.onConnect = function(clientObj){
    trace("on connect");
    return true;
};
application.onDisconnect = function(clientObj){
    trace("on disconnect");
};
application.onPublish = function(clientObj, streamObj){
    trace("in application.onPublish: " + streamObj.name);
    if (pubCount <= 1)
        streamObj.record("record");
    else
        streamObj.record("append");
    pubCount++;
};
application.onUnpublish = function(clientObj, streamObj){
    trace("on unpublish: " + streamObj.type + ":" + streamObj.name);
    streamObj.record(false);
};
2. You can do this by using dynamic streaming. You'll find more information here: http://www.adobe.com/devnet/adobe-media-server/articles/dynstream_advanced_pt1.html
3. You can get various viewer stats using the Admin APIs. A full reference of the APIs is available here: http://help.adobe.com/en_US/FlashMediaServer/3.5_Server_Management_ASD/flashmediaserver_3.5_administrationapi.pdf. getAppStats() may be particularly useful.
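For reference, here is a rough sketch of pulling getAppStats() from Java over HTTP. This is only an illustration and assumes the Administration Service is running on its default port 1111 and accepts Admin API calls over HTTP; the host, admin credentials and application name below are placeholders:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class AppStats {
    public static void main(String[] args) throws Exception {
        // Placeholder admin user, password and application name.
        String user = URLEncoder.encode("admin", "UTF-8");
        String pass = URLEncoder.encode("secret", "UTF-8");
        String app  = URLEncoder.encode("livestream", "UTF-8");

        // Assumption: the Administration Service answers Admin API calls over HTTP
        // on port 1111 and returns the statistics as XML.
        URL url = new URL("http://localhost:1111/admin/getAppStats"
                + "?auser=" + user + "&apswd=" + pass + "&app=" + app);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // raw XML with connection and bandwidth counters
            }
        }
    }
}
The same URL pattern should work for the other Admin API methods described in that PDF.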
Hope this helps. Please let me know if you have any other queries.
Thanks,
Apurva

Similar Messages

  • Questions regarding creation of vendors in different purchasing organisations

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is defined at plant level. My client has vendors in different purchasing organisations. How do I handle the above situation?
    2) I had a few error records while uploading MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is better to go with? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between the two?

    Hi,
    1. Create a BDC program with purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the Action Log in LSMW to find the error records.
    3. See the documentation below.
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the predetermined format as specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links:
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For a document on using BAPI with LSMW, I suggest you visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Regards
    Anji

  • Question regarding IWDTree and context Value Node naming

    Hi,
    I have a question regarding the IWDTree / IWDTreeNodeType components.
    I have a context looking like this:
    Context
      + ResponseNode
        + PersonNode (1..1)
          + PersonAddressNode                    (empty node, placeholder)
          | + AdresNode (0..n)
          + PersonChildNode                      (empty node, placeholder)
          | + PersonNode (0..n)
          |   + PersonAddressNode                (empty node, placeholder)
          |     + AddressNode (0..n)
          + PersonParentsNode                    (empty node, placeholder)
            + PersonNode (0..n)
              + PersonAddressNode                (empty node, placeholder)
                + AddressNode (0..n)
    The context represents a person, a person's address, and a person's children and parents with their respective addresses.
    As a result, on different branches, a PersonNode and AddressNode can appear.
    And for some strange reason, all PersonNodes and AddressNodes link to the same ResponseNode.PersonNode.PersonParentsNode.PersonNode and ResponseNode.PersonNode.PersonParentsNode.PersonNode.PersonAddressNode.AddressNode respectively, regardless of their branch...
    Is it illegal to have multiple PersonNode and AddressNode node names, and should they be named uniquely?

    Generally, node names need to be unique within the context; attributes in different nodes can have the same names. I wonder whether the context structure you described will result in code without compile errors.
    The WD Tree can only be used with recursive context nodes or with a hierarchy of non-singleton child nodes.
    Can you give an example of how your tree should look at runtime?

  • Question regarding Mesh with 3702 and non-AC APs

    Hello! 
    Quick question regarding MESH deployments with two different sorts of APs, AC and non-AC models: if my 3702i is the root AP and the 3602i my MAP, will AC still work at 80 MHz, or will I have to switch to 40 MHz (and thus cripple (???) AC performance)?
    Not 100% sure on this... I *think* it should still work for the normal 802.11n connection, but I'm not sure if the 80 MHz channel width (needed??) for AC will cause the non-AC 3602i to be stranded?
    Thanks a lot for your insight!

    Currently, my network DHCP server is a software-based DHCP server. In reading over your post, if I understood correctly, it sounds like the managed switch would have its own hardware-based DHCP server to assign IP addresses to those clients identified on the "external" VLAN. Did I understand that correctly, or did I misread something?
    The DHCP server will be software based; even though you define it on your switch, it is a DHCP service running on the switch's OS.
    I am configuring this setup for a small business application and will need to purchase a managed switch with 16 or 24 ports. Do you have any recommendations on a particular managed switch that will handle the VLAN configuration and include PoE while keeping costs in mind?
    In this forum, most of us discuss Cisco enterprise-grade wireless. Here are the 2960-X series switch details, if you are interested:
    http://www.cisco.com/c/en/us/products/switches/catalyst-2960-x-series-switches/index.html
    You may need to check the pricing with your Cisco account manager or with a Cisco partner.
    HTH
    Rasika

  • Question regarding Inheritance. Please HELP

    A question regarding Inheritance
    Look at the following code:
    class Tree{}
    class Pine extends Tree{}
    class Oak extends Tree{}
    public class Forest{
      public static void main(String args[]){
        Tree tree = new Pine();
        if( tree instanceof Pine )
          System.out.println( "Pine" );
        if( tree instanceof Tree )
          System.out.println( "Tree" );
        if( tree instanceof Oak )
          System.out.println( "Oak" );
        else
          System.out.println( "Oops" );
      }
    }
    If I run this, I get the output of
    Pine
    Oak
    Oops
    My question is:
    How can Tree be an instance of Pine? Shouldn't it instead be that Pine is an instance of Tree?

    The "instanceof" operator checks whether an object is an instance of a class. The object you have is an instance of the class Pine because you created it with "new Pine()," and "instanceof" only confirms this fact: "yes, it's a pine."
    If you changed "new Pine()" to "new Tree()" or "new Oak()" you would get different output because then the object you create is not an instance of Pine anymore.
    If you wonder about the variable type, it doesn't matter; you could have written "Object tree = new Pine()" and gotten the same result.
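    For example, here is a tiny sketch (the hypothetical class InstanceofDemo reuses the Tree/Pine/Oak classes from your post) showing that the declared type of the variable makes no difference to instanceof:
    class Tree{}
    class Pine extends Tree{}
    class Oak extends Tree{}
    public class InstanceofDemo{
      public static void main(String args[]){
        Tree asTree = new Pine();     // declared as Tree, the object is a Pine
        Object asObject = new Pine(); // declared as Object, the object is still a Pine
        System.out.println( asTree instanceof Pine );   // true
        System.out.println( asObject instanceof Pine ); // true
        System.out.println( asTree instanceof Oak );    // false, so you'd hit the "Oops" branch
      }
    }
    instanceof looks at the actual object on the heap, not at the type of the variable that refers to it.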

  • Question regarding Interfaces: Please GUIDE.

    A question regarding INTERFACES.
    'Each interface definition constitutes a new type.
    As a result, a reference to any object instantiated from any class
    that implements a given interface can be treated as the type of
    the interface'.
    So:
    interface I{}
    class A implements I{}
    Now, class A is of type I. Right?
    Now, if class A implements more than one interface, then what is the actual type of A?
    For example:
    interface I{}
    interface R{}
    class B implements I,R{}
    What is B's type now? I or R? Or both?

    > The class (that implements the interface) actually defines the behavior, and the interface just serves as a contract for that behavior
    Yes.
    > - a view.
    Call it that if you want, but it being "a view" doesn't take away is-a-ness.
    > IMHO, the 'types' are the classes, which qualify for the 'is a' relationship
    As yawmark points out, your use of "type" is not consistent with the JLS. Regardless of how you want to define type, the fact is that it makes sense to say "A LinkedList is a List" and "A String is (a) Comparable" etc. Additionally, the way I've always seen the is-a relationship described, and the way that makes the most sense to me, is that "A is-a B" means "A can be used where B is expected." In this respect, superclasses and implemented interfaces are no different.
    > (which is what the words "extends" and "implements" strongly suggest)
    "Foo extends Bar" in plain English doesn't suggest to me that Foo is a Bar, but quite clearly, in the context of Java's OO model, it means precisely that.
    "Foo implements Bar" in plain English doesn't suggest much to me. Maybe that Foo provides the implementation specified in Bar, and therefore can be used where a Bar is required, which is exactly what implements means in Java and which, as far as I can tell, is the core of what the is-a relationship is supposed to be about in general OO.

  • Questions regarding PO output in SRM 4.0

    Hi All
    I have several questions regarding the settings for PO output in SRM 4.0 (Extended Classic).
    I would really appreciate it if someone could provide the rationale and business reasons behind some config settings.
    I am referring to the BBP_PO_ACTION_DEF transaction in the IMG.
    1) What is the difference between 'Processing when saving document' and 'Immediate processing'? Shouldn't the PO output always be processed after the PO is changed and saved?
    2) In the Determination technology, what is the significance of the term 'Transportable conditions'? Why would it be different if the conditions were manual and not transportable?
    3) In the Rule type, what is the significance of Workflow conditions? How does workflow control the output?
    4) What is the meaning of 'Action merging' in layman's terms, and what does each of the choices like "Max. 1 Unprocessed Action for Each Processing Type" signify?
    5) What powers do the 'changeable in dialog' and 'executable in dialog' indicators give to the user processing the PO? What happens if these indicators are not set?
    6) What does 'Archive mode' in the processing type signify? Are the PO outputs archived and stored?
    Regards
    Kedar

    Hello,
    Have a look at note 564826. It has some information.
    As far as I know, processing time 'Immediate Processing' is not allowed. I really don't know the reason.
    "Processing Time" should be defined as "Process using Selection Report" when an output should be processed by a report, such as RSPPFPROCESS, for example.
    If you set this as "Process when Saving document", then the output will be sent immediately, otherwise you have to process it with transaction BBP_PPF.              
    I hope I could help you a little.
    Kind regards,
    Ricardo

  • My question regards Camera Raw

    I am posting this to both the Photoshop and Lightroom forums. I use Photoshop CS4 and Lightroom 1.1 on a Windows XP OS.
    My question regards Camera Raw.
    Question 1: If I open a RAW file in Photoshop, will the properties remain if I later open it in Lightroom, and vice versa? Previously I had used Canon's DPP, and once I opened files in Photoshop I was back at square one. I concluded this was due to different manufacturers. I can adjust to that. Now that I am staying with just Adobe products, I want to make sure I don't have to do everything twice.
    Question 2: Using Bridge, I have opened some JPEGs in Camera Raw and manipulated them. It seemed to work better if I clicked "Done" rather than "Save Image" (if I need to save it in another directory).
    Question 3: If I am working in Photoshop and would like to either open a RAW file or manipulate a JPEG in Camera Raw, is it possible to open Camera Raw without having to use Bridge?
    Thanks in advance for any assistance.

    rollsnut wrote:
    I am posting this to both the PhotoShop and Lightroom forums.
    You would have been better off asking in the Camera Raw forum, not Photoshop (since you posted it only to the Windows side).
    As for your questions, you can indeed coordinate settings between Camera Raw and Lightroom, but it's a matter of understanding how metadata editing works... In Camera Raw/Bridge, settings are saved in the file or in a sidecar file; in Lightroom, settings are saved in the Lightroom catalog database, not the file or sidecar, UNLESS you specifically instruct Lightroom to read or write the settings to or from the file.
    So, what Camera Raw does to a file will need to be read FROM the file inside of Lightroom. Just make no mistake that once the image is actually processed and opened inside Photoshop, it's no longer a raw file but a processed file, and anything you do to it afterwards in Photoshop will not be in the original raw file.
    The other questions aren't really Lightroom questions and would be better off posted in the Camera Raw forum...

  • BW question regarding the versioning

    Hii All,
    I posted a question regarding the versioning of the cube on Friday, 9th May, and I still did not get any reply. Please let me know, or otherwise let me know that you are unable to answer my question.
    My question was:
    In the versioning of the cube, we give the version a particular name and select its value type as 110, 130 or 140. What is this value type? What do 110, 130 and 140 really mean?
    Why do we need this value type? And can we get some documents to read and explore this value type? Please help.
    Thanks & regards ,
    Madhavi S Bichakal

    Hi Madhavi,
    Basically in BW you'll find two characteristics used for versioning:
    - Version: Used to create different versions of the information
    - Value type: used to indicate what the information means.
    Examples:
    Version 000 is usually Plan/Actual data (the final version). Then, for version 000 you will have different value types, like 010 = Actual, 020 = Plan, 030 = Target, etc..
    Then you can have different versions (001, 002, 003) that are used in the planning process. You start with version 001, then you can move to 002, 003,... and when you have the final Plan, you move to 000.
    That's the usual usage of version / value type.
    But you can use it as you want. The only problem you can have is that if you rename the description of a value type and then activate a BCT that generates data for that value type, the description will be incorrect.
    From what you said, you are using values from 100 and above; SAP uses up to 90 from what I've seen, so you won't have any problems.
    Hope this clarifies.
    Regards,
    Diego

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need for at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
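    Just to restate that arithmetic in one place, here is a throwaway sketch (the numbers are taken from the example above, the class name is made up):
    public class ThroughputEstimate {
        public static void main(String[] args) {
            double rusPerCu = 2000.0;      // one capacity unit
            int collections = 7;           // 1 hot + 6 historic collections
            double neededRusHot = 20000.0; // desired throughput on the hot collection

            // Throughput is split evenly over the collections, so each collection
            // only sees a fraction of every CU's 2000 RUs.
            double rusPerCollectionPerCu = rusPerCu / collections;              // ~286 RUs
            double cusNeeded = Math.ceil(neededRusHot / rusPerCollectionPerCu); // 70 CUs

            System.out.println("RUs per collection per CU: " + rusPerCollectionPerCu);
            System.out.println("CUs needed for the hot collection: " + cusNeeded);
        }
    }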
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is because of the limit of 10 GB per collection at the moment. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and have horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention…but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible or will be possible in the future with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to bring RU consumption down to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options…which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of the Preview, and the stated values are planned to work later?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server-specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case…so queuing is not an option.
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but "hot" and "cold" databases with one or multiple collections each (if 10 GB will remain the limit per collection), if there was a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding the historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Question regarding mic plugged into audio in/optical digital audio in port

    So I have this external headset... a headphone/mic combo... I plug the headphone jack into the headphone slot and the mic jack into the audio in/optical digital audio in port in the back of my iMac. I go to System Preferences and select "Sound" and "Line in - Audio line in port". But when recording, it's still recording from the internal mic? How do I record just from the external mic??? What am I doing wrong? I've read all the other questions regarding how to get external mics to work, but it's still recording from the internal mic? Help?
      Mac OS X (10.4.9)  

    I never expected Apple would make a Mac that didn't have a proper mic jack?
    It has a proper one, a good one, a professional one, not the toy that comes with most PCs. Apple has used line-level analog audio input for years.
    From Wikipedia: Line level is a term used to denote the strength of an audio signal used to transmit analog sound information between audio components such as CD and DVD players, TVs, audio amplifiers, and mixing consoles.
    In contrast to line level, there are weaker audio signals, such as those from microphones and instrument pickups, and stronger signals, such as those used to drive headphones and loudspeakers. The strength of the various signals does not necessarily correlate with the output voltage of a device; it also depends on the source's output impedance, or the amount of current available to drive different loads.

  • Question regarding the installation of a J2EE 6.40 Add-in

    Hi all,
    I would like to install a J2EE engine on a test instance of ECC 5.0 and have a few questions regarding the installation...
    Do I have to use the MASTER CD to first install the J2EE engine (Support Package 0) and then apply the latest support packages found on the SAP Marketplace?
    Or should I be able to directly install the J2EE Add-In by using the latest support packages found on the SAP Marketplace?
    Best regards,
    Xavier Vermaut

    Thanks Bhavik for your reply,
    That's what I actually thought, but I get the following problem... Here's what I wrote in my customer message... I am still waiting for an answer and would like to get this solved ASAP.
    Dear SAP,
    We would like to install the J2EE 6.40 Add-In on our ECC 5.0 instance
    (TST) but get the following error message at the very beginning of the
    installation
    > Cannot find an installed ABAP system, which is a prerequisite for a
    > J2EE Add-In installation. The installation cannot continue.
    We checked the installation logs (sapinst_dev.log) and found the
    following :
    > Found these instances:
    > sid: MGR, number: 00, name: DVEBMGS00, host: erpqs1a
    > sid: TST, number: 10, name: DVEBMGS10, host: erpqs1a
    Why does the installation say that it cannot find any ABAP system when it has previously found the two instances running on this server?
    Would this problem be related to the fact that we have two instances on
    this server?
    Please find hereunder the way we performed this installation :
    01) Download of the 4 different parts of SAP J2EE Engine 6.40 SP 10
         (Solaris 10 - Oracle)
         Part I   : SAPINST10_0-20000121.SAR         (Solaris 64)
         Part II  : CTRLORA10_0-20000121.SAR         (Solaris 64)
         Part III : J2EERTOS10_0-20000121.SAR        (Solaris 64)
         Part IV  : J2EERT10_0-10001982.SAR          (OS Independent)
    02) Extract these 4 archives into /install/J2EE_640
    03) Check Java Version and Environment Variables
    04) Check Solaris Pre-Requisites
    05) Adapt "product.xml" as specified in OSS Note 697535 (IGS)
    06) Log in as 'root'
    07) Set DISPLAY environment Variable
    08) Move to the Installation directory
          ( /install/J2EE_640/SAPINST-CD/SAPINST/UNIX/SUNOS_64 )
    09) ./sapinst
    10) In the 'Welcome to Netweaver Installation' screen, select
          => Dialog Instance Finalization
    Any idea how to get this solved?
    Best regards,
    Xavier Vermaut
    Message was edited by: Xavier Vermaut

  • Questions regarding creating the database

    Hi there,
    From the previous posting, http://forum.java.sun.com/thread.jspa?threadID=640415&tstart=15, someone gave me the "formula" for connecting to the database:
    java.sql.Connection conn = java.sql.DriverManager.getConnection("jdbc:mysql://localhost/name_of_DB", "user", "password");
    Now just a couple of questions regarding the formula:
    1) Obviously, if I want the name of my DB, then I will have to create my DB. Can somebody please tell me the procedure for creating the DB? And where do I create this DB (i.e. can I create it anywhere in my application)? Or do I have to create a new database using MySQL itself?
    2) After creating a database, I would like to create multiple tables containing different data. Is it possible to place the code creating these tables anywhere in the application I want?
    Your ideas or advice would be much appreciated. Thank you in advance.
    Regards,
    Young

    1) Yes, you'll have to create the database using MySQL.
    2) You sure can, once you have the database created with the proper rights assigned to your user. You can put the code anywhere you want, but you may want to put it somewhere it only runs once, like on install, if you're doing a standalone app; see the sketch below.
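    If you prefer to do it from application code, here is a minimal JDBC sketch (the connection URL, credentials, database and table names are placeholders, and the MySQL Connector/J driver is assumed to be on the classpath) that creates the database and one table, and is safe to run more than once:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateDbExample {
        public static void main(String[] args) throws Exception {
            // Connect to the MySQL server itself, without selecting a database yet.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/", "user", "password");
                 Statement stmt = conn.createStatement()) {

                // Create the database (IF NOT EXISTS keeps reruns harmless)...
                stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS name_of_DB");

                // ...then create the tables inside it (placeholder table definition).
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS name_of_DB.customers ("
                        + "id INT AUTO_INCREMENT PRIMARY KEY, "
                        + "name VARCHAR(100))");
            }
        }
    }
    You could also create the database once in the MySQL client, as suggested in point 1, and keep only the table-creation part in your install step.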

  • Questions regarding upgrade from 4th gen to 5th gen iPod

    I recently received a new 80 GB 5th gen iPod and had 2 questions regarding it and my old 40 GB 4th gen. My PC meets the minimum requirements and is running XP Pro.
    1) Do I need to do anything before synching the new 5th gen iPod with my existing Library (i.e. uninstall iPod Updater for 4th gen)?
    2) If I don't need to uninstall the 4th gen Updater, can I run both iPods off the same Library, same user (obviously not simultaneously)?
    tia - joggy
    N/A   Windows XP Pro  

    Hey, joggy!
    1) No, I don't believe you do. Be sure to disconnect the 4th gen iPod before connecting the 5th generation iPod to the computer.
    2) Yes, you can.
    There are basically two methods for managing multiple iPods on one computer:
    Method 1 - Create a different Windows user account for each registered iPod on this computer.
    Method 2 - Create a playlist in iTunes for each iPod.
    To make Method 2 work, connect one of your iPods and click on it in the left source panel.
    Under the "Music" tab, set your option for a specific playlist(s) under the "Sync Music" option.
    Do the same with your other iPod; not connected at the same time, though.
    For more details on this matter, check out Apple's Support article about it:
    How to manage multiple iPods using one computer
    I hope that helps you.
    -Kylene

  • 3 questions regarding duplicate script

    Here is my script for copying folders from one Mac to another Mac via Ethernet:
    (This is not meant as a backup, just to automatically distribute files to the other Mac.
    For backup I'm using Time Machine.)
    cop2drop("Macintosh HD:Users:home:Desktop", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Documents", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Pictures", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Sites", "zome's Public Folder:Drop Box:")
    on cop2drop(sourceFolder, destFolder)
    tell application "Finder"
    duplicate every file of folder sourceFolder to folder destFolder
    duplicate every folder of folder sourceFolder to folder destFolder
    end tell
    end cop2drop
    1. One problem I haven't sorted out yet: how can I modify this script so that all source folders (incl. their files and sub-folders) get copied as corresponding destination folders (same names) under the Drop Box?
    (At the moment the files and sub-folders arrive directly in the Drop Box and mix with the other destination files and sub-folders.)
    2. Every time before a duplicate starts, I have to confirm this message:
    "You can put items into "Drop Box", but you won't be able to see them. Do you want to continue?"
    How can I avoid or override this message? (This script is meant to run at night, when no one is near the computer to press OK again and again.)
    3. A few minutes after the script starts running I get:
    "AppleScript Error - Finder got an error: AppleEvent timed out."
    How can I stop this?
    Thanks in advance for your help!

    Hello
    In addition to what red_menace has said...
    1) I think you may still use the System Events 'duplicate' command if you wish.
    Something like SCRIPT1a below. (Handler is modified so that it requires only one parameter.)
    *Note that the 'duplicate' command of Finder and System Events duplicates the source into the destination. E.g. A statement 'duplicate folder "A:B:C:" to folder "D:E:F:"' will result in the duplicated folder "D:E:F:C:".
    --SCRIPT1a
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    with timeout of 36000 seconds
    tell application "System Events"
    duplicate folder sourceFolder to folder destFolder
    end tell
    end timeout
    end cop2drop
    --END OF SCRIPT1a
    2) I don't know the said error -8068 thrown by Finder. It's likely a private Finder error code which is not listed in any of the public headers. And if it is a Finder thing, you may or may not see a different error, which would be more helpful, when using System Events to copy things into the Public Folder. Also, you may create a normal folder, e.g. one named 'Duplicate', in the Public Folder and use it as the destination.
    3) If you use rsync(1) and want to preserve extended attributes, resource forks and ACLs, you need to use the -E option. So at least 'rsync -aE' would be required. And I remember the looong thread that failed to tame rsync for your backup project...
    4) As for how to get the POSIX path of a file/folder in AppleScript, there are different ways.
    Strictly speaking, POSIX path is a property of alias object. So the code to get POSIX path of a folder whose HFS path is 'Macintosh HD:Users:home:Documents:' would be :
    POSIX path of ("Macintosh HD:Users:home:Documents:" as alias)
    POSIX path of ("Macintosh HD:Users:home:Documents" as alias)
    --> /Users/home/Documents/
    The first one is the cleanest code because the HFS path of a directory is supposed to end with ":". The second one also works because the 'as alias' coercion will detect whether the specified node is a file or a directory and return a proper alias object.
    And as for the code :
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    It is to strip the trailing '/' from the POSIX path of the directory and get '/Users/home/Documents', for example. I do this because in shell commands the trailing '/' of a directory path is not required, and indeed, if it's present, it makes certain commands behave differently.
    E.g.
    Provided /a/b/c and /d/e/f are both directories, cp /a/b/c /d/e/f will copy the source directory into the destination directory, while cp /a/b/c/ /d/e/f will copy the contents of the source directory into the destination directory.
    rsync(1) behaves in the same manner as cp(1) regarding the trailing '/' of the source directory.
    ditto(1) and cp(1) behave differently for the same arguments, i.e., ditto /a/b/c /d/e/f will copy the contents of the source directory into the destination directory.
    5) In case you need them, here are revised versions of the previous SCRIPT2 and SCRIPT3, which require only one parameter. They will also append any error output to a file named 'cop2dropError.txt' on the current user's desktop.
    *These commands with the current options will preserve extended attributes, resource forks and ACLs when run under 10.5 or later.
    --SCRIPT2a - using cp(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
    set sh to "cp -pR " & quoted form of src & " " & quoted form of dst
    do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT2a
    --SCRIPT3a - using ditto(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
    set sh to "src=" & quoted form of src & ";dst=" & quoted form of dst & ¬
    ";ditto \"${src}\" \"${dst}/${src##*/}\""
    do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT3a
    Good luck,
    H
    Message was edited by: Hiroto (fixed typo)
