Trigger alarm delay on STM links.

Hi Team,
Problem description:
Configuring a delay trigger alarm on the STM SONET controller to reduce the multiple flaps observed on STM-1 links.
Root cause shared by Reliance:
It was found that BB links are affected even by a single fiber cut on the ring, where the protection switching time is <50 ms.
This is mainly due to path alarms (Path AIS, RDI, LOP, etc.) that appear at the end interface.
Path alarms are raised during fiber cuts and auto-cleared once protection switching completes.
This is expected behavior in any SDH/SONET convergence network and has no adverse effect on end-user services.
Mitigation:
In order to mitigate these short-lived path alarms during network convergence, we need to configure a delay trigger on the SONET controller.
However, we cannot find the command to configure this.
Logs are attached; we need further suggestions on this.
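For reference, on Cisco routers this kind of hold-off is usually configured per POS interface rather than under the SONET controller itself. A minimal sketch on classic IOS, assuming POS interfaces over the STM-1 controller (command availability, ranges, and defaults vary by platform and release, so please verify on your IOS version):

interface POS0/1/0
 ! hold off line-protocol reaction to line/section alarms (e.g. 100 ms)
 pos delay triggers line 100
 ! extend the hold-off to path-level alarms (AIS-P, RDI-P, LOP-P)
 pos delay triggers path 100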

See here http://forums.adobe.com/message/3447643#3447643
Gramps

Similar Messages

  • Logon trigger not working over DB-Link?

    Hi all,
    I have a serious question about accessing tables over a database link.
I have three schemas:
    DATA@SOURCE
    INTERFACE@SOURCE
    WORK@TARGET
    Schema DATA has one table called T1
    The INTERFACE schema has select privileges on all tables from DATA. Furthermore schema INTERFACE has a logon trigger to change the "current schema" to DATA:
    CREATE OR REPLACE TRIGGER TRG_A_LOGIN_SET_SCHEMA AFTER LOGON
    ON INTERFACE.SCHEMA
    BEGIN
    execute immediate 'ALTER SESSION SET CURRENT_SCHEMA = DATA';
    END;
    The WORK schema has a database link to the INTERFACE schema called INT_DB_LINK.
I am now logged into schema WORK on the TARGET database and I am executing the following statement:
    select a from T1@INT_DB_LINK
    -> it's working
    Next I execute
    declare
      cursor c is 
      select a
        from T1@INT_DB_LINK
       where rownum<2;
    begin
      for r in c loop
        null;
      end loop;
    end;
This is not working. The error message is ORA-00942: table or view does not exist.
    But why?
    Can anyone help me?
    Thanks in advance
    Py

    Hi all,
    after a long, very long search I found what caused this strange behaviour.
    The ORA- Error was not raised by the SQL-Execution-Engine but by the SQL-Parser/SQL-Validation.
As the second statement is an anonymous PL/SQL block, the Oracle parser checks all object dependencies before execution.
    This means a connection is established from TARGET to SOURCE checking if table T1 is available. The strange thing is
    that on this connection the "ALTER SESSION" trigger is not fired. So the parser does not find object T1 in schema INTERFACE.
If I create an empty table T1 in INTERFACE, the anonymous block gets parsed/validated and the statement is executed. But this
time the block does a normal "connect session" and the trigger is fired. This means the statement accesses the T1 table in
schema DATA. (But T1 has to exist in INTERFACE so that parsing/validation works.)
    I don't know if this is a bug or a feature.
    To workaround this I have created private synonyms in schema INTERFACE pointing to the objects in DATA.
    Thanks for your help!
    Py
Regarding the other question:
Yes, permissions are granted via a role.
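A minimal sketch of the synonym workaround described above, assuming only the single table T1 (repeat per object, or generate the statements from DATA's object list):

        -- connected as INTERFACE on SOURCE:
        CREATE SYNONYM T1 FOR DATA.T1;
        -- the parser's trigger-less connection from TARGET can now
        -- resolve T1@INT_DB_LINK even before the logon trigger fires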

  • Delaying the command link action till user presses Yes/No in dialog

Hi,
On a command link in an af:table I am trying to display a Yes/No dialog box. On clicking the link, the action bound to the link gets executed first and then the dialog gets initialized. Can we somehow defer the link execution until the user presses Yes/No?
    following is the code
    =======================================================================
<af:column id="c13" headerText="Lock">
  <af:commandLink text="lockUser"
                  disabled="#{!bindings.lockUser.enabled}"
                  id="cl1"
                  action="#{searchBean.lockUser}"
                  partialSubmit="true"
                  partialTriggers="cl1">
    <af:showPopupBehavior popupId="::p1"
                          triggerType="click"/>
  </af:commandLink>
</af:column>
    =======================================================================
<af:popup id="p1" partialTriggers="p1">
  <af:dialog id="d2" dialogListener="#{searchBean.dialogListener}"
             partialTriggers="d2" title="xcx" type="okCancel"/>
</af:popup>
    =======================================================================
public String lockUser() {
    BindingContainer bindings = getBindings();
    OperationBinding operationBinding = bindings.getOperationBinding("lockUser");
    Object result = operationBinding.execute();
    System.out.println("***111");
    if (!operationBinding.getErrors().isEmpty()) {
        return null;
    }
    searchUser();
    return null;
}
    =======================================================================
public void dialogListener(DialogEvent dialogEvent) {
    String ss = "HHH";
    System.out.println(ss);
    if (dialogEvent.getOutcome().equals(DialogEvent.Outcome.ok)) {
        lockUser();
        System.out.println("SAVE your work here");
    } else {
        System.out.println("Do Not Do Any Thing");
    }
}
    =======================================================================

This should be possible if you remove the action from the af:commandLink and execute the action in the dialogListener:
    public void dialogListener(DialogEvent dialogEvent) {
        Outcome lOutcome = dialogEvent.getOutcome();
        if (lOutcome.equals(Outcome.ok)) {
            // do the navigation
            ControllerContext ccontext = ControllerContext.getInstance();
            // set the viewId - the name of the view activity to go to - to display
            String viewId = "Emp";
            ccontext.getCurrentViewPort().setViewId(viewId);
            // or execute a method....
        }
    }
Timo
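A sketch of the matching commandLink from the original posting with the action removed, so that clicking it only raises the popup and the lockUser call moves into the dialog listener:

        <af:commandLink text="lockUser"
                        disabled="#{!bindings.lockUser.enabled}"
                        id="cl1">
          <af:showPopupBehavior popupId="::p1" triggerType="click"/>
        </af:commandLink>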

  • Trigger delay about 2 days

    Hey guys,
my idea to start virtual machines in SCVMM which are offline and to perform a client action (via SCCM) to roll out the new updates our company released only needs one little piece in that puzzle. Once the newest updates are deployed and installed, I want to shut down the virtual machines. The trigger delay that you can set up in the "links" between the actions can only be set up to 999 seconds.
Do you have an idea for a trigger delay that can shut down the virtual machines after 2 days, for example? I don't know that much about scripting; maybe there is an easier way.
    Thanks for your help and best regards,
    Simon

    You can use a "run .net activity" and use PowerShell:
    Start-Sleep -seconds 172800 #pause execution for 2 days
    There should be a better way than to have a runbook sit idle for two days. You might look into triggering the runbook from an external scheduling application like Control-M or something similar.
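One concrete variant of the external-scheduling idea, using the Windows Task Scheduler cmdlets instead of a third-party tool. This is a sketch only: it assumes the SCVMM PowerShell module is available on the machine running the task, and the task name and VM name are placeholders:

        # register a one-shot task that stops the VM two days from now
        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
                   -Argument '-Command "Get-SCVirtualMachine -Name VM01 | Stop-SCVirtualMachine"'
        $trigger = New-ScheduledTaskTrigger -Once -At (Get-Date).AddDays(2)
        Register-ScheduledTask -TaskName 'ShutdownVM01AfterUpdates' -Action $action -Trigger $trigger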

  • Error 1074396154 received during delayed trigger.

I am using a GigE camera to capture an image and then check for a hole using IMAQ Find Circular Edge. Each time the camera receives a trigger, the image is captured, the while loop processes it, and the results are displayed. If the trigger is delayed for more than about 5 seconds, I get Error 1074396154 (IMAQ Spoke - Image not large enough for the operation).
My understanding was that the while loop would not try to process until a new image was received. Is there possibly a setting which controls the while loop operation?
    Regards,
    Doug

    The Oracle database doesn't support the returning clause for database views using instead-of triggers. ADF BC inherently includes the returning clause, if in a relating EO you have the refresh-after-insert or refresh-after-update property set on for an attribute.
    Hope this helps.
    CM.

  • Triggered AO with Delay

    Hi,
I want to trigger a delayed analog output. I am using the DAQmx trigger property node to start the AO after a certain amount of time. Can anyone tell me what is the shortest delay that can be used with the property node? Is the timing done through the software or is it hardware controlled? I am using an M Series PCI-6229 DAQ card.
I want to delay the AO by around 20 microseconds from the digital trigger.
    thanks
    Colm 

    Hi Colm,
    the Start>>More>>Delay Units Property help says
    Specifies the units of Start>>More>>Delay.
Sample Clock Periods (10286) - complete periods of the Sample Clock.
Ticks (10304) - timebase ticks.
Seconds (10364) - seconds.
    Start>>More>>Delay
    Specifies an amount of time to wait after the Start Trigger is received before acquiring or generating the first sample. This value is in the units you specify with Start>>More>>Delay Units.
So depending either on the sample clock you've got set up or on the timebase you're using, it is hardware controlled, but you must make sure that you've armed the trigger before it actually arrives.
20 µs on a 20 MHz timebase (50 ns resolution) is a delay of 400 ticks.
If you specify in seconds, then it will be calculated and linked to the master timebase.
By default the delay units should be Ticks.
    Hope that helps
    Sacha Emery National Instruments (UK)
    // it takes almost no time to rate an answer
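For anyone driving this from a text API instead of LabVIEW property nodes: the same Start.Delay / Start.DelayUnits pair is exposed in the nidaqmx Python package. A sketch only, with device, channel, rate, and trigger line as placeholders:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType, DigitalWidthUnits

        with nidaqmx.Task() as ao:
            ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
            ao.timing.cfg_samp_clk_timing(100000, sample_mode=AcquisitionType.FINITE,
                                          samps_per_chan=1000)
            ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
            # 20 us on the 20 MHz timebase = 20e-6 / 50e-9 = 400 ticks
            ao.triggers.start_trigger.delay_units = DigitalWidthUnits.TICKS
            ao.triggers.start_trigger.delay = 400
            ao.write([0.0] * 1000, auto_start=False)
            ao.start()
            ao.wait_until_done()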

  • IMAQdx with JAI and a hardware trigger

    Hello,
    We are working with two 'JAI AD-080' cameras and IMAQdx, and have two problems regarding the triggering and frame grabbing:
    1)  We are unable to change the trigger source through IMAQdx property node or the Vision Acquisition Express; Vision Acquisition block.
When we manually edit the trigger source property using NI Measurement and Automation Explorer (MAX) to its correct value, we can't get all four of the CCDs to run at a time without resulting in bad packets, e.g. horizontal black lines across the images.
    Our goal is to obtain the images from the 4 CCD's at a rate of 5 Hz using our hardware trigger.  We can already connect and obtain all four images at full speed, but the 5 Hz trigger is not being used by the cameras in that case.
    Details of the setup:
    NI 2011, Windows 7 
    Two (2) JAI AD-080 cameras (with 2 CCD's each), GigE cameras connected over Ethernet
    Hardware triggering at 5 Hz, on pin:  'Line 7 - TTL In 1'
    Details of the problem:
    (1)  Setting the trigger source not possible in Vision Express or IMAQdx property node
In order to use our hardware trigger, we have to set the camera property 'CameraAttributes::AcquisitionControl::TriggerSource' to a specific pin (Line 7 - TTL In 1). This property is available in MAX, but is not usable in the Vision Express Block. The property is present, but the values are invalid. Here is what I think is happening: the list of properties is read from the camera, but LabVIEW does not know what the valid values are for that property, so it populates the value drop-down menu with whatever was loaded last. This can be seen in figures 1 and 2, where the values in the drop-down menu change.
Similarly, this property of 'Trigger Source' cannot be changed programmatically using the IMAQdx property node shown here: http://digital.ni.com/public.nsf/allkb/E50864BB41B54D1E8625730100535E88
    I have tried all numeric values from 0 to 255, and most give me a value out of range error, but the ones that do work result in no change to the camera.
    (2)  Lost packets in image during triggering
If I set the 'Trigger Source' property in MAX to the correct pin, save the configuration, and then use the Vision Acquisition Express block to connect to the camera, the triggering works properly (the hardware trigger is used and the images are received by LabVIEW at 5 Hz). However, this only works for one CCD: if I use the same code for all four CCDs at the same time, I get black bars on the images, and at least one of the CCDs results in an error in 'IMAQdx Get Image.vi' (codes -1074360308, -1074360316).
    I tested this by using the configuration attributes created by the Vision Express Block, (The string used in the 'IMAQdx Read Attributes From String.vi'),  in the code we have been developing as well as a very simplified version which I have attached.  Those configuration attributes are saved in the text files:  JAI_Config_TrigON.txt and JAI_Config_TrigOFF.txt for using triggering or not respectively.  
    So my final questions are:
    Is there a problem with the IMAQdx because it doesn't recognize the trigger source value?
    Do you have any suggestions for why there are bad packets and trouble connecting to the cameras when I load them with the trigger on attributes?
    Thank you for your time - 
    Attachments:
Fig1_VisionAcq.png 387 KB
Fig2_VisionAcq.png 442 KB
Fig3_BadPackets.png 501 KB

    Hello,
    Thank you for your response; especially the speed in which you responded and the level of detail.  
I have not solved the problem fully in LabVIEW yet, but I was able to remove the black lines and apparitions from the images using different camera parameters.
    Since this was a significant victory I wanted to update:
    1)  Version of IMAQdx?
    I have IMAQdx 4.0, but the problem persists.
    2)  Setting configuration files
    Your suggestion to pay attention to the order in which the properties are set as well as using the MAX settings is very helpful.  I have not explored this feature fully, but I was able to successfully use MAX to set the default settings and then open the cameras programmatically without loading a new configuration file.  
    3)  Bandwidth limitations
I modified the CCDs to only use 250 Mbit/s, but the lost packets (or missing lines/apparitions) were still present.
    4)  JAI AD-080GE Specifics
    I am using the JAI AD-080GE; and there are two settings for this camera that I want to mention:  
    JAI Acquisition Control>> Exposure Mode (JAI)>>Edge pre-select
    JAI Acquisition Control>> Exposure Mode (JAI)>>Delayed readout EPS trigger
    The "Edge pre-select" mode uses an external trigger to initiate the capture, and then the video signal is read out when the image is done being exposed.
    The "Delayed readout EPS trigger" can delay the transmission of a captured image in relation to the frame start.  It is recommended by JAI to prevent network congestion if there are several cameras triggered simultaneously on the same GigE interface.  The frame starts when the 'trigger 0' is pulsed, then stored on the camera, then is transmitted on 'trigger 1'.  
    The default selection is the "Delayed readout EPS trigger", however, I do not know how to set the 'trigger 1' properly yet and I only have one connection available on my embedded board that is handling the triggering right now (I don't know if 'trigger 1' needs to be on a separate line or not).  Incidentally, the system does not work on this setting and gives me the black lines (aka lost packets/ apparitions).
I was able to remove the black lines and apparitions using the "Edge pre-select" option on all 4 images with a 5 Hz simultaneous trigger. I confirmed this using the "JAI Control Tool" that ships with the cameras. I am unable to make this happen in MAX though, as the trigger mode is automatically switched to 'off' if I use the mode: JAI Acquisition Control>> Exposure Mode (JAI)>>Edge pre-select.
That is, when manually switching the trigger mode to 'on' in MAX, the "JAI Acquisition Control>> Exposure Mode (JAI)>>Delayed readout EPS trigger" option is forced by MAX. The vice versa is also forced, so that if EPS mode is chosen, "Trigger Mode Off" is forced.
    Additionally, there is a setting called:
    Image Format Control>>Sync Mode>>Sync     &     Image Format Control>>Sync Mode>>Async
    When the "Sync" option is chosen the NIR CCD uses the trigger of the VIS CCD.  In addition to using the "Edge pre-select" option, the "Sync" option improves the triggering results significantly.  
    5)  Future troubleshooting
    Since I cannot set the camera parameters manually in MAX (due to MAX forcing different combinations of parameters in 4), I am going to explore manually editing the configuration file and loading those parameters at startup.  This can be tricky since a bad combination will stall the camera, but I can verify the settings in JAI Control Tool first.  There is also an SDK that is shipped with the cameras, so I may be able to use those commands.  I haven't learned C/C++ yet, but I have teammates who have.

  • 802.1x access points using ISE for trigger

We are deploying APs with 802.1x ports. We do not want to have static AP ports. When plugged into a switch port with 802.1x configured, the AP does not kick off the smart port trigger. How do I link the trigger from ISE to send the response for the trigger on the switch to reconfigure the switchport for an AP?
    thanks,

    Hello,
Please check this link for "802.1x using Cisco ISE"; it may help you with this.
    https://supportforums.cisco.com/docs/DOC-29409

  • Simulate Delay..!!

    Dear All,
I'm running a lab and I have 2 gateways connected via a back-to-back E1 link to simulate a leased line.
What I want to do is simulate delay on this link, just to see if the call will be transferred to the POTS leg instead of the VoIP call leg due to detected delay, packet loss, and jitter, and based on this the switching will take place.
Any helpful hints?
    Regards,
    Tamer Bayomy

    Hi !
    If you can use Ethernet instead of E1 (and make the Ethernet look like E1 with the help of a traffic shaper)...
    you should look at NISTNet ( http://is2.antd.nist.gov/itg/nistnet/ ) which is a network emulator which is able to emulate a network with tuneable variable delay, fixed delay, drop, etc...
    It runs on a linux box with 2 ethernet interfaces.
    Mike
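NISTNet itself is quite old; the same idea now ships with the Linux kernel as the netem qdisc, so any Linux box with two interfaces can do this. A sketch (interface name and numbers are placeholders):

        # add 150 ms +/- 20 ms of delay and 1% loss on the forwarding interface
        tc qdisc add dev eth1 root netem delay 150ms 20ms loss 1%
        # remove it again
        tc qdisc del dev eth1 root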

  • How to sync and/or trigger NI-DAQmx devices in LabView

I am using LabVIEW 2009 with a cDAQ-9174 chassis that includes a NI 9263 analog output device and a NI 9222 analog input device. I've learned from the examples that I can sync the devices to the same sample rate. But I am looking to sync the devices such that the exact moment I start acquiring input data on a 9222 channel, the 9263 starts generating an output. Secondly, I am interested to know whether, using just this equipment, there is a way to very accurately trigger one device off the other with some delay. So in summary I am interested in two behaviors:
    1) Synchronize a 9263 output channel to a 9222 input channel, such that the start of a 9222 analog input measurement triggers a 9263 analog waveform output. and,
    2) Have the 9263's analog output start for example EXACTLY 100 samples (provided sample rates are synced) after the 9222 starts collecting data. In other words, a delayed trigger.
    I imagine this is preferably done in hardware such that no software delays occur. Is this possible using these devices? Or do I need some external clocking mechanism?
    Thank you!

    Hi tzoom84,
    Going back to your original questions,
    To do 1) and 2), I don't think you need to use counters.
    The shipping example LabVIEW 2010\examples\DAQmx\Synchronization\Multi-Function.llb\Multi-Function-Synch AI-AO.vi is probably a good starting point for 1). (Possible problem: if you're using multiple AI timing engines at the same time, this example's assumption that the AI task has "ai/StartTrigger" is not valid. If this turns out to be an issue, you may need to replace Get Terminal Name With Device Prefix.vi with the DAQmx Trigger >> Start.Term property and add a call to reserve the task.)
    For 2), add the DAQmx Trigger >> Start.Delay and DAQmx Trigger >> Start.DelayUnits properties to the AO task:
    If you really do need to use counters, you can do so without the NI 9401 by using cDAQ1/_ctr0 through cDAQ1/_ctr3, which are internal channels: How do I Access Internal Channels on any DAQmx Device?
    Brad
    Brad Keryan
    NI R&D
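Brad's two steps written out against the text-based nidaqmx Python API, for reference. The module slot names are assumptions; the LabVIEW property nodes map one-to-one onto these properties:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType, DigitalWidthUnits

        ai = nidaqmx.Task()
        ao = nidaqmx.Task()
        ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod2/ai0")  # NI 9222, slot assumed
        ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod1/ao0")  # NI 9263, slot assumed
        rate, n = 10000, 1000
        ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=n)
        ao.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=n)
        # 1) AO starts on the AI start trigger
        ao.triggers.start_trigger.cfg_dig_edge_start_trig("/cDAQ1/ai/StartTrigger")
        # 2) ...delayed by exactly 100 sample clock periods
        ao.triggers.start_trigger.delay_units = DigitalWidthUnits.SAMPLE_CLOCK_PERIODS
        ao.triggers.start_trigger.delay = 100
        ao.write([0.0] * n, auto_start=False)
        ao.start()   # armed, waiting for the AI start trigger
        ai.start()   # starting AI releases both tasks
        data = ai.read(number_of_samples_per_channel=n)
        ai.close(); ao.close()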

  • How to introduce dead time in analog signals?

Hi, I'm new to LabVIEW and I want to introduce a dead time in the analog signals that I have attached in this post. The time difference varies, so I could not introduce a fixed time delay in the signal.
For instance, in the attached waveform I want the first signal to be separated from the second signal, so I get 2 different signals at the end of the day for analysis. During the no-signal time, nothing is recorded.
    thanks in advance for all the help 
    Attachments:
Time delay.JPG 37 KB

Hi skydew,
It is possible to introduce a delay in an analog output. This is best done by specifying a trigger and then beginning the output when that trigger occurs. Below is a link to an example program developed for LabVIEW that waits a specified number of seconds and then triggers the analog output to begin. I think this is what you mean by dead time. Take a look at this example and try to build from it for your code. If you have any further questions on this issue, let me know.
    Analog Input with Delayed Analog Output
    Also see some links below:-
    http://forums.ni.com/t5/LabVIEW/Finding-the-time-delay-between-two-signals/td-p/2125100
    http://forums.ni.com/t5/LabVIEW/calculating-time-delay-between-two-signals-aquired-from-DAQ/td-p/849...
    Thanks as kudos only

  • How to send TTL output AND acquire AI voltage data using USB-6211

    Hello,
    I am relatively new to Labview, so please bear with me.  I have a research application involving 2 pressure transducers and a high-speed camera.  I wish to acquire analog voltage data from the 2 pressure transducers.  However, at the start of the acquisition, I will need to send a single TTL output to trigger the camera.  This TTL pulse must be sent out at exactly the same time that the AI acquisition begins, in order to ensure that my 2 pressure measurements and camera images are 'synchronized' in time.
    Is this possible on the USB-6211 running with LabView 8.20?  I currently have a fairly simple LabVIEW vi that uses a software trigger to start an AI acquisition - I have attached it with hopes that it may help anyone willing to assist me.  I would prefer to be able to simply add something to it so that it will output a TTL pulse at the start of the acquisition.  
    Thank you in advance.
    Regards, Larry
    Attachments:
USB6211_v1.vi 212 KB

    Hi All,
I'd like to clear a few things up. First, you'll find that if you try to set the delay from the AI start trigger and the delay from the AI sample clock to 0, you'll get an error. Due to hardware synchronization and delays, the minimum you can set is two. Note that when I say two, I am referring to two ticks of the AI sample clock timebase, which for most acquisitions is the 20 MHz timebase. I modified a shipping example so you can play around with those delays if you want to - I find that exporting the signals and looking at them with a scope helps me visualize what is going on. The manual has some good timing diagrams as well, but it looks like you've already hit that. The defaults would give you a delay of 250 ns from the start trigger - is this too high for your situation? What is an acceptable delay? I tend to think that "exactly the same time" is a measure of how precise rather than an absolute (think of delays in cable length making a difference).
    With all that in mind, I see a few options:
    Start your camera off of the AI start trigger (an internal signal) and just know it is 250 ns before your first convert. 
    Export the convert clock to use as a trigger. This assumes your camera can ignore the next set of convert clocks.
More complicated option: internally you have an AI start trigger, a sample clock, and a convert clock. From your start trigger to the first convert is 250 ns, but if you export your convert clock you're going to get future convert clocks as well. One option would be to generate a single triggered pulse using a counter (start with the Gen Dig Pulse-Dig Start.vi example) with the AI start trigger as the trigger for the counter, an initial delay of 250 ns, and a high time of whatever you want it to be. This should give you a single pulse at very close to the same time (on the order of path delays) as your first convert clock.
    Hope this helps, 
    Andrew S
    MIO DAQ Product Support Engineer
    Getting Started with NI-DAQmx
    Measurement Fundamentals
    Attachments:
Acq&Graph Voltage-Int Clk.vi 37 KB
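Andrew's "more complicated option" (one counter pulse fired off the AI start trigger), sketched with the nidaqmx Python API for reference; channel names and times are placeholders:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType

        ai = nidaqmx.Task()
        co = nidaqmx.Task()
        ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        ai.timing.cfg_samp_clk_timing(10000, sample_mode=AcquisitionType.FINITE,
                                      samps_per_chan=1000)
        # a single pulse: 250 ns initial delay after the trigger, then 10 us high
        co.co_channels.add_co_pulse_chan_time("Dev1/ctr0", initial_delay=250e-9,
                                              low_time=1e-6, high_time=10e-6)
        co.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")
        co.start()   # counter armed, waits for the AI start trigger
        ai.start()   # AI start fires the TTL pulse on the ctr0 output pin
        data = ai.read(number_of_samples_per_channel=1000)
        co.close(); ai.close()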

  • LabVIEW remote control and data transaction

    Hi,
We have a complicated timing control system written in LabVIEW 6.1 running on our lab PC-1. Since it is not easy to upgrade to a more recent version, and we have limited PCI slots and need to add a few device control programs (written in C++ or updated LabVIEW), we are thinking of an ultimate control system with two PCs, letting the old timing program be a "slave" of the "master" computer PC-2. The OS for both PCs is Windows XP.
The communication between the two PCs is fairly straightforward: suppose that on PC-1 the program breaks into two parts, part A and part B. PC-2 sends selection messages to PC-1 to determine which part needs to be running during a particular shot. It is true that, in our best interest, we might need PC-2 to send data to PC-1, in case for a particular run we have to change some data values of the timing program.
The problem might appear to be a piece of cake to you experts. It is worth mentioning that on PC-1, any changes made to the timing program are nontrivial due to its structural complexity. The program is DAQ-based (finite) pattern generation (using a lot of PCI-6534/PCI-6713/counters, external clocking).
The requirement for speed is not high: PC-2 can in general wait for 1 second before PC-1 receives a message and starts to output. On the other hand, the smaller the delay, the better.
We have considered a few communication options like TCP/IP, RS232 ports, etc. But before we start implementing the changes, we would really appreciate your suggestions/comments!
Thanks very much in advance for your time!
    Kunyan

    Kunyan,
    there are really several different ways to solve this. The major question is: Is this a handshake protocol or simple "notifications"?
    If it's simple notifications, things are getting quite easy.
You can use TCP/UDP/DataSocket/STM if there is a network connection between both PCs. Otherwise, RS232 might be a good idea as well, for sure... but I suggest a network technology in order to reduce delays.
STM is a protocol built on top of TCP, and even if you do not use it (it is not available for LV 6.1!), it might give you some ideas about defining a custom protocol.
    hope this helps,
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
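STM is not an option on LV 6.1, but the framing it is built around is easy to reproduce over raw TCP. A language-neutral sketch of a length-prefixed message in Python, purely to illustrate the custom-protocol idea (the LabVIEW TCP Read/Write primitives follow the same pattern):

        import socket
        import struct

        def send_msg(sock: socket.socket, payload: bytes) -> None:
            # 4-byte big-endian length header, then the payload
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_exact(sock: socket.socket, n: int) -> bytes:
            data = b""
            while len(data) < n:
                chunk = sock.recv(n - len(data))
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                data += chunk
            return data

        def recv_msg(sock: socket.socket) -> bytes:
            (length,) = struct.unpack(">I", recv_exact(sock, 4))
            return recv_exact(sock, length)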

  • Creation of a new Oracle Database 10g in Express Edition

I am new to Oracle 10g Express Edition.
Please let me know which user is able to create a database and tables, views, and fire update, insert, and delete commands.
What is the syntax before creating a database/user?
What is the syntax for creating a database?
After creating a database, can we straight away create tables and any other objects?
Please let me know through SQL command examples.
    Ajit

user13349882, welcome.
I suggest you first read a bit about what a database is, the kinds of databases, and the structure of a database, then about database objects such as tables, views, procedures, functions, packages, sequences, triggers, etc.
Follow the links below:
    http://searchsqlserver.techtarget.com/definition/database-management-system
    http://mdameryk.tripod.com/OrclOverview.htm
    http://www.utexas.edu/its/products/oracle/
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/intro.htm
I hope that after finishing this material you will have the essential information; good luck. Note: use the OTN forum periodically; you will find more of the necessary info in questions that were asked before and answered by several DBAs.
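To make this concrete for the actual questions: in Express Edition you do not create a database with SQL - the XE database already exists. You create a user (schema), grant it privileges, and that user then owns the objects. A minimal sketch, with user name and password as placeholders:

        -- connected as SYSTEM:
        CREATE USER ajit IDENTIFIED BY secret;
        GRANT CONNECT, RESOURCE TO ajit;

        -- connected as AJIT, objects can be created straight away:
        CREATE TABLE t1 (a NUMBER);
        INSERT INTO t1 VALUES (1);
        UPDATE t1 SET a = 2;
        DELETE FROM t1;
        CREATE VIEW v1 AS SELECT a FROM t1;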

  • Share invitation emails

    Hello.
    I have SAP Mobile Documents SP3, rhel 6, unicode kernel, 7.4sr2.
I have no problem with sending emails using the Java Mail Service in English.
But when I try to create invitation.html and invitation.properties in Russian, using UTF-8 or Unicode encoding for these files, I can't add members to the share.
I also read the documentation and found the possibility to set the charset property.
I returned the encoding for invitation.properties and invitation.html to ANSI and tried to set the property charset="text/html;charset=UTF-8", and I was still unable to add share members.
I think the documentation contains a mistake in the name of the property "charset", because it is actually called "content".
I renamed "charset" to "content", tried to add members, and it was successful. That's why I really think the documentation contains a mistake.
And finally I changed only the invitation.html file - I put Russian text there and set UTF-8 encoding for this file.
As a result, the server successfully sent the email but all Russian words appeared like ????????.
So, my questions are:
1. Does SAP Mobile Documents support Unicode/UTF-8 encoding for emails? If no - when? If yes - could you please send me all the details on how to set it up?
2. It seems to me that the property "charset" in the documentation should be renamed to "content". Is that true?
    Boris.
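One Java-side detail that may matter for question 1, independent of the Mobile Documents configuration: classic .properties files are read as ISO-8859-1, so Cyrillic text in invitation.properties has to be escaped as \uXXXX sequences rather than saved as UTF-8. The JDK's native2ascii tool (available up to JDK 8) does this conversion; the key name below is hypothetical:

        native2ascii -encoding UTF-8 invitation_utf8.properties invitation.properties

        # resulting invitation.properties entry ("Приглашение" escaped):
        subject=\u041F\u0440\u0438\u0433\u043B\u0430\u0448\u0435\u043D\u0438\u0435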

You can use the Sharing.DocumentSharingManager class with the UpdateDocumentSharingInfo method in CSOM. The delay may be due to the fact that when you assign a group to share the document with, a timer job is queued to generate an email for each user in the group. These are then queued to send the email out via whatever SMTP server you have set up for SharePoint. This can cause some delays.
CSOM Link: https://msdn.microsoft.com/en-us/library/office/microsoft.sharepoint.client.sharing.documentsharingmanager.updatedocumentsharinginfo.aspx
    REST Link:
    http://sharepointfieldnotes.blogspot.com/2014/09/sharing-documents-with-sharepoint-rest.html
    Blog | SharePoint Field Notes Dev Tools |
    SPFastDeploy | SPRemoteAPIExplorer
