Association columns in DocumentSet

There are two content types: 'DocSets' for the document set and 'Docs' for the documents inside it. The 'Docs' content type has two columns: 'Title' and 'Document Type' (Choice).
I created a reusable workflow and associated it with the 'DocSets' content type. Then I created an association column, 'Document Type', from the 'Docs' content type to use later in the workflow.
When I try to use the command 'Wait for Document Type to equal Agreement', nothing happens. The only record in the history is: 'Waiting on Document Type'.
What am I doing wrong? How can I use the value of the association column?
Thanks

Hi,
I tried to add the “Document Type 3” column to the “Docs” document content type, and then use the “Docs” content type in the “Test DocSet”.
If we open the Library Settings, under Columns, we can see that the “Document Type 3” column is used in “Docs”.
We can create a reusable workflow associated with the 'Test DocSet' content type as below:
The result is as below:
We can see that the value of the “Document Type 3” column in the current item is blank.
However, we cannot change its value because it is only used in “Docs”.
In other words, no matter how we change the value of the “Document Type 3” column in “Docs”, the “Document Type 3” column in the current item remains blank.
So the workflow status will stay “In Progress”, waiting on Document Type 3, and it cannot log the message.
Thank you for your understanding.
Best Regards,
Linda Li
TechNet Community Support

Similar Messages

  • Association Columns in Sharepoint 2013 Reusable Workflow

    http://msdn.microsoft.com/en-us/library/jj728659.aspx says association columns are not available in the SharePoint 2013 Workflow platform.
    But http://msdn.microsoft.com/en-us/library/dn292551.aspx#bkm_07 suggests using association columns in SharePoint Server 2013. Doesn't SharePoint Server 2013 use SharePoint 2013 workflows?
    Cheers, IXI solution

    Hi,
    According to your description, my understanding is that you want to know if the Association Columns can be used in SharePoint 2013 workflow.
    Per my knowledge, the Association Columns feature is available only on the SharePoint 2010 Workflow platform.
    The article in the link http://msdn.microsoft.com/en-us/library/dn292551.aspx#bkm_07 introduces best practices for developers using Visual Studio to create workflows in SharePoint 2013.
    We can create 2010 workflows in SharePoint 2013, and we can use the Association Columns feature in SharePoint 2010 workflows.
    We can also integrate features from the SharePoint 2010 Workflow platform into the new SharePoint 2013 Workflow platform.
    To do this, create a SharePoint 2010 workflow by choosing the SharePoint 2010 Workflow platform; create a SharePoint 2013 workflow by choosing the SharePoint 2013 Workflow platform; and then use the Start a list workflow and Start a site workflow actions in the SharePoint 2013 workflow to call the SharePoint 2010 workflow.
    If you have any questions, please feel free to reply.
    Thanks

  • SPD 2010 Workflow: Issue with Associated Column

    We have a SP2010 workflow that is attached to a content type and mapped in the Retention Policy settings. The workflow uses a DateTime column, which is exposed to the workflow as an associated column.
    The issue is that when we run the workflow manually, it runs successfully and can read the value of the associated column. When the Retention timer job starts the workflow, the workflow cannot read the value of the associated column (we verified this by logging messages to a list).
    Any pointers/guidance on how to overcome this issue is much appreciated.
    Blog: http://dotnetupdate.blogspot.com |

    Thanks for the suggestions. 
    From the debugging there is no issue from a functionality perspective or the date calculations. It works perfectly well when we manually run the workflow or if I set the workflow to start when an item is uploaded.
    The behavior is different when the same workflow is getting triggered from a Retention Policy as a Stage. Is this something to do with access permissions or security context when a SPD 2010 Workflow gets triggered by a Timer Job/Retention Policy? 
    Regards,
    Vikram
    Blog: http://dotnetupdate.blogspot.com |

  • What's the usage of the associated columns?

    In the business model, logical columns are associated to a level of a dimension. But the columns are neither primary keys nor used for display. What is the purpose of these associated columns?

    To flesh out our example a little more.
    You have a Day Dim, and a Month Dim table. Your month name logical column is mapped to both tables via different Logical table sources. Your content levels are set accordingly for those LTS's.
    You have a fact table at grain Day and an aggregated fact table at grain Month. You have a measure (dollars, quantity, whatever) mapped to both physical tables through two LTS's with content levels set (like the Time Dimension).
    So when you query Month Name and Dollars, the BI Server uses the aggregate table, as it can serve up both the dimension attribute and the measure from the aggregate table. It knows it's more efficient because the ratio of the number of elements you defined in the hierarchy tells it that there is approximately 30 times less data in the month aggregate, by the ratio of 12 months to 365 days.
    So far so good.
    Let's introduce more dimensions: another dimension is joined to the base fact table but is NOT in the month aggregate. You model it all as per normal; when you drop this extra dimension into your report, the BI Server recognises that no aggregate source can give the correct answer across all required fields, so it drops to the next best fit at the grain requested in the report. In this simple case it defaults back to the base fact table, as that is the only fact table capable of serving up the month and any attributes from the other dimension. In more complex models you might have a few aggregate tables with either higher grain (higher levels) or less dimensionality (fewer rows). The existence of physical-layer joins tells the BI Server what can be joined for the correct dimensionality; the setting of LTS levels tells it which aggregate tables would be more efficient.
    Don't get confused with setting a level on a measure - that's something different (level-based totals, % of Year, etc.).
    Got it?
    These might help :
    http://gerardnico.com/wiki/dat/obiee/fragmentation_level_based
    http://www.rittmanmead.com/2006/11/aggregate-navigation-using-oracle-bi-server/
    http://obiee101.blogspot.com/2008/11/obiee-making-it-aggregate-aware.html
    let us know if any more Q's.

  • How do I update columns in a library using PowerShell during a file upload?

    I am trying to put together a script that will do a bulk upload of files, along with associated metadata, into a SP library. The first part of the requirement is to upload .pdf files while grabbing the metadata from the file name. Currently, my script does the uploads, but it does not update the fields with the metadata it gets from the file names. Here is what my script currently looks like:
    if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell") -eq $null) {
        Add-PSSnapin Microsoft.SharePoint.PowerShell
    }
    #Script settings
    $webUrl = "http://llc-hdc-spfe1d:19500/sites/SampleRecordCenter/"
    $docLibraryName = "My Library"
    $docLibraryUrlName = "MyLibrary"
    $localFolderPath = Get-ChildItem "C:\test" -Recurse
    #Open web and library
    $web = Get-SPWeb $webUrl
    $docLibrary = $web.Lists[$docLibraryName]
    $files = ([System.IO.DirectoryInfo] (Get-Item $localFolderPath)).GetFiles()
    ForEach ($file in $files) {
        if ($localFolderPath | where {$_.extension -eq ".pdf"}) {
            #Open file
            $fileStream = ([System.IO.FileInfo] (Get-Item $file.FullName)).OpenRead()
            #Gather the file name
            $FileName = $File.Name
            #Remove the file extension
            $NewName = [IO.Path]::GetFileNameWithoutExtension($FileName)
            #Split the file name on the "_" character
            $FileNameArray = $NewName.split("_")
            $check = $FileNameArray.Length
            $myArray = @()
            foreach ($MetaDataString in $FileNameArray) {
                $myArray += $MetaDataString
            }
            #Add file
            $folder = $web.GetFolder($docLibraryUrlName)
            Write-Host "Copying file" $file.Name "to" $folder.ServerRelativeUrl "..."
            $spFile = $folder.Files.Add($folder.Url + "/" + $file.Name, [System.IO.Stream]$fileStream, $true)
            if ($FileNameArray.Length -eq 3) {
                #Populate columns
                $spItem = $docLibrary.AddItem()
                $spItem["FirstColumn"] = $myArray[0]
                $spItem["SecondColumn"] = $myArray[1]
                $spItem["ThirdColumn"] = $myArray[2]
                $spItem.Update()
            }
            elseif ($myArray.Length -eq 4) {
                #Populate columns
                $spItem = $docLibrary.AddItem()
                $spItem["FirstColumn"] = $myArray[0]
                $spItem["SecondColumn"] = $myArray[1]
                $spItem["ThirdColumn"] = $myArray[2]
                $spItem["FourthColumn"] = $myArray[3]
                $spItem.Update()
            }
            #Close file stream
            $fileStream.Close()
        }
    }
    #Dispose web
    $web.Dispose()
    The .pdf files follow the same naming convention, like "first_second_third.pdf" and "first_second_third_fourth.pdf"... I want to grab each part of the file name and put that data in the associated column in the library. Right now, I am getting my file name and storing that information in an array, but my code isn't updating each column as I hope it will. What am I doing wrong here?
    Thanks for the help.

    Just figured out what was wrong with my logic...this does the trick.
    if ($localFolderPath | where {$_.extension -eq ".pdf"}) {
        $fileStream = ([System.IO.FileInfo] (Get-Item $file.FullName)).OpenRead()
        $FileName = $File.Name
        $NewName = [IO.Path]::GetFileNameWithoutExtension($FileName)
        $FileNameArray = $NewName.split("_")
        $folder = $web.GetFolder($docLibraryUrlName)
        $spFile = $folder.Files.Add($folder.Url + "/" + $file.Name, [System.IO.Stream]$fileStream, $true)
        $spItem = $spFile.Item
        if ($FileNameArray.Length -eq 3) {
            $spItem["FirstColumn"] = $FileNameArray[0].ToString()
            $spItem["SecondColumn"] = $FileNameArray[1].ToString()
            $spItem["ThirdColumn"] = $FileNameArray[2].ToString()
            $spItem.Update()
        }
        elseif ($FileNameArray.Length -eq 4) {
            $spItem["FirstColumn"] = $FileNameArray[0]
            $spItem["SecondColumn"] = $FileNameArray[1]
            $spItem["ThirdColumn"] = $FileNameArray[2]
            $spItem["FourthColumn"] = $FileNameArray[3]
            $spItem.Update()
        }
        $fileStream.Close()
    }
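    The key change above is writing the metadata to $spFile.Item (the item of the file just uploaded) rather than $docLibrary.AddItem(), which creates a separate, empty list item. The split-and-map logic itself can be sketched outside SharePoint; here is a small Python version (the column names mirror the hypothetical FirstColumn..FourthColumn fields in the script, not a real API):

    ```python
    import os

    def metadata_from_filename(filename):
        """Split 'first_second_third.pdf' into ordered column values."""
        stem = os.path.splitext(os.path.basename(filename))[0]
        parts = stem.split("_")
        # Map positional name parts to the (hypothetical) library columns.
        columns = ["FirstColumn", "SecondColumn", "ThirdColumn", "FourthColumn"]
        if len(parts) not in (3, 4):
            raise ValueError("unexpected name format: " + filename)
        return dict(zip(columns, parts))

    print(metadata_from_filename("alpha_beta_gamma.pdf"))
    # {'FirstColumn': 'alpha', 'SecondColumn': 'beta', 'ThirdColumn': 'gamma'}
    ```

    The zip() naturally handles both the 3-part and 4-part naming conventions without two separate branches.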

  • Report region with column link that opens a pdf doc based on report query

    Hello
    I'm building a report table that displays info about a customer - a simple select - and, for each record, has associated column links based on report queries that receive an ID as a parameter. When clicked, the link opens the report as a PDF. My problem is how to pass the ID as a parameter to that report query, considering I'm using a report table and there are no items on page 71...
    This is the report query i'm using:
    select initcap(a.customer) customer
    , initcap(a.address) address
    , initcap(a.rep) rep
    , (select initcap(b.city)
    from portal_records b
    where b.contrib=a.contrib
    and b.year=to_char(sysdate,'yyyy')) city
    , (to_char(a.datereg,'dd')||' de '||to_char(a.datereg,'Month')||' de '||to_char(a.datereg,'yyyy')) datereg
    from portal_authorizations_cve a
    where a.id=:P71_ID ???????????????
    Thanks in advance for all your replies!!

    Hello
    First of all, let me compliment you on your demo application... It's awesome!
    I've looked into your sample (page 15) and, as far as I can see, it opens a document saved in a table column. I don't want the file to be saved there, but generated when the user clicks that particular link... So I still have the problem of how to pass the right ID as a parameter, considering there is no page item on that page...
    My JavaScript knowledge is limited, so I ask you: when clicking the link, is there any way of opening a window with the url f?p=&APP_ID.:0:&SESSION.:PRINT_REPORT=Authorization_CVE and the ID as a parameter?
    I thank in advance!
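    One possible approach (a sketch based on standard APEX f?p URL syntax, not tested against this application): in a report region you can reference the current row's value with a #COLUMN# substitution in the column link, so no page item on the calling page is needed. Assuming the report's SELECT also returns the id column aliased as ID, the link target could look like:

    ```
    f?p=&APP_ID.:0:&SESSION.:PRINT_REPORT=Authorization_CVE:::P71_ID:#ID#
    ```

    Positions 7 and 8 of the f?p URL carry item names and values, so :P71_ID is set from the clicked row before the report query runs. The alias ID and the item name P71_ID are assumptions taken from the query above; adjust them to the actual names.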

  • Update a site column dynamically to Approval Status

    Hello all, this is what I have:
    An InfoPath form with a Form Status field (Form Status is promoted to the SP form library as a site column); the form switches to a read-only view when Form Status = Approved.
    A SharePoint form library set to require content approval.
    A copied and modified OOB Publishing Approval workflow with an association column, Form Status, added.
    As it stands, the workflow runs and updates the list item's Form Status to the Approval Status; unfortunately, Approval Status is then set back to Pending (which is correct behavior, I assume). I need some help! Can I insert the Update List Item action in the workflow task steps? I want Form Status to start as Pending and then reflect whatever the Approval Status value is. This is my first attempt at modifying an OOB workflow - is this the right approach?
    Thanks for any help. I have tried several solutions but I am definitely missing something. Should the workflow restart on item change? Should I have versioning turned on? All I am trying to accomplish is to have the form switch to read-only if it's approved, and then, if for some reason someone needs to make a change, I need Form Status and Approval Status to have the same value.

    Hi,
    If you only want to approve content, you can enable content approval.
    If you want to manage versions, you need to enable content approval first and then enable version.
    More information:
    Require approval of items in a site list or library
    Enable and configure versioning for a list or library
    Best Regards,
    Linda Li
    TechNet Community Support

  • RemoteApp 2012 and File Type Associations

    Hi, All :),
    the scenario is like this - 2 x Server 2012: one as the RD Connection Broker and RD Web Access, and one as the RD Session Host (clients are Windows 7). ALMOST everything works: there is one collection and several applications, all successfully pushed into the RemoteApp and Desktop Collections by GPO (PowerShell script - http://gallery.technet.microsoft.com/ScriptCenter/313a95b3-a698-4bb0-9ed6-d89a47eacc72/).
    But one thing doesn't work - file type associations. I want a client without, for example, Office installed to open the RemoteApp application (Excel) by clicking on an XLSX file. When looking at the application properties -> File Type Associations -> Current Associations column, I see the name of the collection, which includes Excel... but it does not work. After updating the RemoteApp and Desktop Connections on the client, I noticed a new line in the RDP file for Excel:
    remoteapplicationfileextensions:s:.csv,.xls,.xlsx,.xltx
    How do I make it work? I know that in RemoteApp on Server 2008 R2 you could export an MSI file that took care of file type associations, but what to do in the case of Server 2012?

    Unfortunately, the feature to install file type associations for RemoteApp programs in your GPO-pushed RemoteApp and Desktop Collection is only supported on Windows 8 clients. Several improvements were made over the old MSI-based file type associations, and those improvements rely upon changes in the Windows 8 OS.
    Ultimately file type associations are controlled with registry keys, all of which are documented on MSDN. RemoteApp programs are pretty much the same as any other local program, except the file type is associated with mstsc.exe with some special parameters.
    Although the way we register those file type associations in Windows 8 won't work for you, the way the old MSIs used to register them would. You could in theory register the RemoteApp file type associations yourself in the same way that one of the old MSIs
    would have done (you could use some sort of GPO-pushed script on the clients).
    The easiest way to do this would be to track down one of those old MSIs, install it, and do a before-and-after view of the registry to see what changes it makes.
    Hope that helps,
    Travis Howe | RDS Blog: http://blogs.msdn.com/rds/default.aspx

  • Flat Files - Column Concantenation

    I was just wondering if anyone has ever written something, or knows of a source, where one can get a list of handy hints for dealing with flat files in SSIS, so as to avoid common drawbacks associated with loading data from flat files. In my experience flat files can be such a headache. I have worked with flat files for a while now and I'm relatively comfortable with them, but occasionally I hit stubborn issues. My latest challenge is that two of the columns from a CSV file are being concatenated in the intended destination. Please see the illustration below.
    The surprising thing is that the concatenated columns have a comma separating them.
    The rows affected by this end up with associated columns being shifted out of proper mapping.
    In the illustration below, the first two rows are the offending rows and the last two rows show what the expected results should look like.
    Data type is VARCHAR on all columns
    ProductCode | ProductName | Quantity | BestBeforeDate
    NULL        | BRD         | Bread    | 13,2014-03-06
    NULL        | MLK         | Milk     | 5,2014-03-15
    BTR         | Butter      | 4        | 2014-05-02
    EGG         | Eggs        | 12       | 2014-03-12
    The following are the things which I have tried, without success yet:
    Ticking the "Retain NULL Values" box on the Flat File Source component
    Using double quotes as the Text Qualifier when exporting data to CSV
    Using double quotes as the Text Qualifier when loading data from CSV
    Not using anything for the Text Qualifier
    Exporting data to a Raw File destination and importing it from a Raw File source, hoping that the raw file might preserve the original format
    Please note that none of the values in the source columns being exported contain commas. All values in all columns are simple alphanumeric, with no special characters.
    Suggestions will be warmly welcome.
    Many thanks,
    Mpumelelo

    Thank you for your responses
    Jonathan - I've never used a hex editor before. Is there any other way around this, other than a hex editor?
    B3nt3n - What is your delimiter? - I don't know if I understand your question correctly. I am using the default settings on the Flat File component. The only places where I have made changes are the addition of the double quotes in the Text Qualifier areas, as well as ticking the "retain null values..." option. Everything else is default.
    A csv would have commas. But you are saying there are no commas when you open the .csv in notepad? - I meant there are no commas in the original data values as they are in the table, before being exported to the csv file. That is, none of the values in a given column has a comma in it. However, there are commas in the csv file itself, as you have rightly said about csv file formats, but not in the table for the data that I am dealing with.
    Mpumelelo
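    The symptom - a value like "13,2014-03-06" landing in a single cell - usually means the two fields were quoted together somewhere in the export/import chain. The effect of a text qualifier can be seen in a standalone sketch (plain Python csv, not SSIS itself):

    ```python
    import csv
    import io

    # An unquoted comma is a field separator: four separate fields.
    unqualified = next(csv.reader(io.StringIO('BRD,Bread,13,2014-03-06\n')))
    print(unqualified)  # ['BRD', 'Bread', '13', '2014-03-06']

    # A quoted pair "13,2014-03-06" stays as one field, mimicking the
    # concatenated column seen in the destination table.
    qualified = next(csv.reader(io.StringIO('BRD,Bread,"13,2014-03-06"\n'), quotechar='"'))
    print(qualified)    # ['BRD', 'Bread', '13,2014-03-06']
    ```

    So a hex editor (or any byte-level view) is only needed to confirm whether the offending rows in the actual file carry stray quote characters around the Quantity/BestBeforeDate pair.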

  • Data Modeler 3.0 EA2: Column type changes to unknown

    Drawing a relational model
    Create table TABLE_1
    add Column_1 - number - primary key
    Create table TABLE_2
    add Column_2 - varchar2 - primary key
    create foreign key TABLE_2_TABLE_1_FK
    Create unique constraint TABLE_2 for columns Column_2 , TABLE_1_Column_1
    Create a foreign key TABLE_1_TABLE_2_FK
    Change PK / UK Index from TABLE_2.TABLE_2_PK to TABLE_2.TABLE_2__UK
    Select Associated Columns and select Child columns:
    Parent column          Child column
    Column_2               TABLE_2_Column_2
    TABLE_1_Column_1       Column_1
    Press OK - and the datatype of column TABLE_2.TABLE_1_Column_1 becomes UNKNOWN. It was NUMBER.
    Situation before problem happens looks like following. If the model is reverse engineered from this DDL the problem does not happen.
    CREATE TABLE TABLE_1
    (
         Column_1 NUMBER  NOT NULL
    ) ;
    ALTER TABLE TABLE_1
        ADD CONSTRAINT TABLE_1_PK PRIMARY KEY ( Column_1 ) ;
    CREATE TABLE TABLE_2
    (
         Column_2 VARCHAR2  NOT NULL ,
         TABLE_1_Column_1 NUMBER  NOT NULL
    ) ;
    ALTER TABLE TABLE_2
        ADD CONSTRAINT TABLE_2_PK PRIMARY KEY ( Column_2 ) ;
    ALTER TABLE TABLE_2
        ADD CONSTRAINT TABLE_2__UN UNIQUE ( Column_2 , TABLE_1_Column_1 ) ;
    ALTER TABLE TABLE_2
        ADD CONSTRAINT TABLE_2_TABLE_1_FK FOREIGN KEY
        ( TABLE_1_Column_1 )
        REFERENCES TABLE_1
        ( Column_1 )
    ;

    Hi Rafu,
    I logged bug for that.
    Import of DDL is not the only way to get it working; you just need to follow the steps in the DDL. The TABLE_1_Column_1 column is created automatically as an FK column when you work manually. If you create it before the FK (and set its data type) and then select it as the FK column, you'll see the same behavior as with DDL import.
    Philip

  • Dynamically display/hide columns according to user inputs

    Hi,
    I need to display or hide columns in reports based on user input. I.e., I have several prompts; if the user doesn't input values, then the associated columns should not be displayed in the report. Is there a way to achieve this in BO XI R2? Your help is greatly appreciated.

    Hi Andy,
    You can't hide them totally, but you can get close using alerters.
    Build an alerter (probably a formula-based one) that puts the value ="" in the field.
    The formula should test the user response for that column: filled in --> false, otherwise --> true.
    Apply the alerter to all cells of the column. The column will empty completely when the prompt was not filled.
    Then set the width of the column to automatic (best on the header cell only) and the minimum width to zero. The column shrinks when empty, making it seem as if hidden.
    Good luck,
    Marianne

  • List of indexes and columns for a database.

    Hi
    Do you know the SQL command to get the list of indexes and associated columns for all tables in a given database?
    The following only shows me the table and index names, but I would also like to get the columns for each index:
    SELECT o.name, i.name
      FROM sysobjects o
      JOIN sysindexes i ON (o.id = i.id)
    Can you pls help
    Thanks
    H.

    There isn't a single command that will do that.
    There is the sp_helpindex stored procedure, which gives you the information on indexes one table at a time; you could call it in a loop, but it returns other information as well, so the output would be messy.
    You can look at the source code of sp_helpindex to find out how it decodes the key column names:
    use sybsystemprocs
    go
    sp_helptext sp_helpindex
    go
    The core of it is this loop, which builds up a list of the column names in @keys, a varchar(1024) declared earlier.
            /*
            **  First we'll figure out what the keys are.
            */
            declare @i int
            declare @thiskey varchar(255)
            declare @sorder char(4)
            declare @lastindid int
            declare @indname varchar(255)
            select @keys = "", @i = 1
            set nocount on
            while @i <= 31
            begin
                    select @thiskey = index_col(@objname, @indid , @i)
                    if (@thiskey is NULL)
                    begin
                            goto keysdone
                    end
                    if @i > 1
                    begin
                            select @keys = @keys + ", "
                    end
                    /*select @keys = @keys + index_col(@objname, @indid, @i)*/
                    select @keys = @keys + @thiskey
                    /*
                    ** Get the sort order of the column using index_colorder().
                    ** This support is added for handling descending keys.
                    */
                    select @sorder = index_colorder(@objname, @indid, @i)
                    if (@sorder = "DESC")
                            select @keys = @keys + " " + @sorder
                    /*
                    **  Increment @i so it will check for the next key.
                    */
                    select @i = @i + 1
            end
            /*
            **  When we get here we now have all the keys.
            */
            keysdone:
                    set nocount off
    -bret
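    Stripped of the T-SQL plumbing, the loop just builds a comma-separated key list, appending DESC after any descending key. The same logic in a few lines of Python (format_index_keys and its (name, sort_order) input are stand-ins for index_col()/index_colorder(), not a real API):

    ```python
    def format_index_keys(columns):
        """Join index key columns with ', ', adding DESC for descending keys,
        mimicking how sp_helpindex builds up @keys."""
        parts = []
        for name, sort_order in columns:
            parts.append(name + (" DESC" if sort_order == "DESC" else ""))
        return ", ".join(parts)

    print(format_index_keys([("id", "ASC"), ("created_at", "DESC")]))
    # id, created_at DESC
    ```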

  • Would PL/SQL Associative Arrays solve my problem

    Hi guys,
    I'm new to PL/SQL development really, although I have some limited knowledge that I've relied on for the last couple of years to get by. Anyway, I'll try my best to describe my problem and how I'd like to solve it...
    I have a table of information that holds a column for descriptive names of payments and another column holding a difference value, for example:
    employee_number | payment_name | difference
    00001           | salary       | 200.20
    00001           | shift        | 20.21
    00002           | salary       | 10.01
    00002           | shift        | 5.02
    00003           | salary       | 15.02
    00003           | shift        | 4.00

    I'd like to manipulate the way this data is presented, via DBMS_OUTPUT, in a summary fashion, counting the number of differences between ranges, for example:

    payment_name | 0.00 to 10.00 | 10.01 to 100.00 | 100.01 to 99999.99
    salary       |               | 2               | 1
    shift        | 2             | 1               |

    I thought it might be possible to use an approach in PL/SQL to mimic this table structure and populate it via a cursor looping through my initial recordset, posting the total count in the associated columns as required. Once the cursor had finished populating the PL/SQL table / array etc., I could base my DBMS_OUTPUT on this data.
    Or am I completely barking up the wrong tree - would there be a better, more efficient way to solve the problem? I've been reading up on PL/SQL Collections (http://docs.oracle.com/cd/B10501_01/appdev.920/a96624/05_colls.htm#19661) but can't really determine a) whether this is the correct approach, or b) whether I should use an associative array, nested table, or varray.
    Thanks in advance guys, just need a pointer in the right direction.

    It sounds like you can just pivot the data
    SELECT payment_name,
           SUM( CASE WHEN difference BETWEEN 0 and 10 THEN 1 ELSE 0 END ) "0.00 to 10.00",
           SUM( CASE WHEN difference BETWEEN 10.01 and 100 THEN 1 ELSE 0 END ) "10.01 to 100.00",
           SUM( CASE WHEN difference BETWEEN 100.01 and 99999.99 THEN 1 ELSE 0 END ) "100.01 to 99999.99"
      FROM table_name
    GROUP BY payment_name

    You can then do whatever you want with the data this query returns.
    Justin
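    The same bucketing can be sanity-checked outside the database. A small Python sketch over the sample rows above (the bucket labels mirror the SQL column aliases; this is an illustration, not the PL/SQL solution):

    ```python
    from collections import defaultdict

    # (payment_name, difference) pairs from the sample data in the question.
    rows = [
        ("salary", 200.20), ("shift", 20.21),
        ("salary", 10.01),  ("shift", 5.02),
        ("salary", 15.02),  ("shift", 4.00),
    ]
    # Inclusive ranges, matching the BETWEEN clauses in the pivot query.
    buckets = [("0.00 to 10.00", 0.00, 10.00),
               ("10.01 to 100.00", 10.01, 100.00),
               ("100.01 to 99999.99", 100.01, 99999.99)]

    counts = defaultdict(lambda: defaultdict(int))
    for name, diff in rows:
        for label, lo, hi in buckets:
            if lo <= diff <= hi:
                counts[name][label] += 1

    print(dict(counts["salary"]))  # {'100.01 to 99999.99': 1, '10.01 to 100.00': 2}
    print(dict(counts["shift"]))   # {'10.01 to 100.00': 1, '0.00 to 10.00': 2}
    ```

    The counts match the expected summary table: salary has 2 in "10.01 to 100.00" and 1 in "100.01 to 99999.99"; shift has 2 in "0.00 to 10.00" and 1 in "10.01 to 100.00".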

  • Fields missing in Designer 2010 workflow

    I created a few Yes/No columns through Association Columns in Designer, but now the value is missing and my workflow crashes. I fixed it, saved, and published the workflow yesterday; when I opened the workflow this morning, I saw the same issue.
    When I tried to modify it, I see this:

    Hi Madnan,
    I tested the same scenario per your post and got the same results you did.
    It seems that the association column is designed for reusable workflows; if we use association columns in list workflows, this issue occurs.
    In the .xoml file of the workflow, the reference to the associated column should look like image 1; however, in a list workflow it changes to image 2 after the workflow is refreshed.
    So the error occurs when we run the list workflow.
    I recommend using association columns in reusable workflows instead of list workflows.
    More references about association column:
    http://blogs.msdn.com/b/sharepointdesigner/archive/2011/05/02/association-columns.aspx
    https://msdn.microsoft.com/en-us/library/office/dn292551.aspx
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • SharePoint: programmatically add list item under current user - 0x80070005 (E_ACCESSDENIED)

    Please help me.
    In our company, we developed an application for Apple devices to work with the company's internal services; among other things, it can post messages to the corporate blog portal, which runs on SharePoint 2013.
    The application architecture is as follows:
    The server running SharePoint 2013 is called "SPS01". On the same server, a web application called "WS" is deployed in IIS, in a separate application pool from SharePoint 2013.
    All Apple client devices communicate with SharePoint 2013 through the web application "WS". It is important to us that requests from the Apple mobile devices to SharePoint 2013 keep the security context of the authenticated user, since access to documents stored in SharePoint 2013 is restricted per user.
    To preserve the authenticated user's security context, we use impersonation and Kerberos.
    A request goes like this:
    The Apple client connects to the web application "WS", which has Kerberos and impersonation enabled (Apple iOS 7 supports Kerberos); all further interaction between the client and SharePoint 2013 goes through "WS". We use Kerberos to avoid the "double hop" problem that occurs when "WS" accesses other network resources beyond the server "SPS01".
    Example:
    The web application "WS", running on "SPS01", accesses a file server "FS01" to get a list of documents from an ordinary file share; thanks to Kerberos and impersonation, the request arrives at "FS01" as the user authenticated by "WS".
    But there is a problem:
    If the user goes to SharePoint 2013 through a browser on a computer, he can successfully post to the blog.
    If the user goes to SharePoint 2013 from an Apple mobile device and tries to leave a blog message, this error appears: Access Denied. Exception: Access is denied.
    (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
    Empirically, we found the following. I tried granting these rights, and it did not help:
    Farm administrator
    Owner of the SQL config and content databases
    Administrator of the User Profile Service
    Local administrator
    BUT!
    When SharePoint 2013 was deployed, a domain security group "domain\farm_administrators" was created, which is in the local SharePoint 2013 group "Farm Administrators". If the user is included in the domain security group "domain\farm_administrators", then the user can successfully post to the blog from an Apple mobile device.
    I want to note that this works only through the AD security group "domain\farm_administrators".
    The question is: how can we determine what special rights the security group "domain\farm_administrators" has in SharePoint 2013 that let its members successfully post blog messages from Apple mobile devices via the web application "WS"?
    We also found a workaround, but it does not work the way we would like:
    RunWithElevatedPrivileges
    http://msdn.microsoft.com/en-us/library/bb466220.aspx
    We can execute part of the web application "WS" with elevated privileges. In this case, the user can successfully post a message to the blog from an Apple mobile device.
    But when the code runs with elevated privileges, the security context of the authenticated user is lost:
    the author is shown as "System Account".
    public static XmlDocument addBlogMessage(string projectID, string text)
    {
        XmlDocument response = new XmlDocument();
        string result = string.Empty;
        string webUrl = SPUrlUtility.CombineUrl(Constants.SITE_URL, projectID);
        using (SPSite site = new SPSite(webUrl))
        using (SPWeb web = site.OpenWeb())
        {
            SPList blogMessagesList = web.GetList(SPUrlUtility.CombineUrl(webUrl, Constants.BLOG_MESSAGES_LIST_URL));
            SPListItem newBlogMessage = SPUtility.CreateNewDiscussion(blogMessagesList, "Сообщение");
            newBlogMessage[SPBuiltInFieldId.Body] = text;
            web.AllowUnsafeUpdates = true;
            newBlogMessage.Update();
            web.AllowUnsafeUpdates = false;
            result = "<response><userResult><code>Success</code><messageID>" + newBlogMessage["ID"].ToString() + "</messageID></userResult></response>";
            response.Load(new System.IO.StringReader(result));
        }
        return response;
    }

    Yes, I make sure that the Assigned To field is filled, as the workflow will not work unless that field contains a valid username.
    I am an administrator on this SharePoint site, and several of the people I am emailing are not admins on the same site.
    The issue seems to lie in the workflow itself. In the one "Send an email" step, you fill out what should go in the To: field of the email. According to all the documentation I have read, that field should say "Current Item:Assigned To", but mine will only enter "Assigned To" - it is missing the key first part of where to look. This is despite the fact that, going through the steps, I indicated that the data should be found in the Current Item, in the Assigned To field, and should return the email address, which is well within the realm of this workflow action.
    I have been able to accomplish this in another workflow that is not reusable and is tied to one list only. This seems to be an issue only when doing it in a reusable workflow with Association Columns.
    Any further ideas or suggestions?  Any would be appreciated.
    Regards,
    Lara K.
