Importing a crawl

I tried several times to import a crawl into FCP 6.0.2 that had been generated in Chyron's Lyric program, with no success. We tried generating a Targa sequence, but FCP won't import that. Then we exported the crawl out of Chyron as a QuickTime movie, used QT Pro to compress it in the codec of the project (NTSC 29.97 DVCPRO50), and upon import it was very herky-jerky with soft resolution. I used the crawl tool in FCP, and it looked fine, except it was very basic and couldn't generate the good-looking buttons, logos, and colors that Chyron's crawl could, let alone the animated drop-down logo that initiated the crawl. So the producer wouldn't buy it (darn picky producers).
Any suggestions on how to import a smooth, high-quality crawl from a third-party program? We're posting a race, so this is the running-order crawl at the top of the screen. It's not continuous; it only needs to come in about every 25 laps or so. Thanks, ef

Best advice I can think of to try is the same methodology I use going from After Effects to FCP.
Export from the 3rd party app as QT on the Animation codec at the highest settings.
Import that to FCP and let it do the render down to the sequence codec.
Might want to check under the prefs of the 3rd-party app that the "project" settings and "timebase" are right for NTSC. I should point out that I am in no way familiar with the software you are using, but am giving generic pointers as to where I'd look for a solution first.

Similar Messages

  • Can't import .RTF into Boris Title Crawl without crashing

    I laid out my titles in Word, saved as an .RTF and then when I clicked import in Boris Title Generator (crawl) and selected the .RTF file, FCP crashes.
    I tried opening a new empty project and doing the title there - same problem.
    I tried cutting my .RTF into two parts. Same problem.
    I've spent hours tonight first trying to layout in columns and better spacing in Boris (leading command made the lines crazy after render), so finally thought to import but is crashing.
    Any help very much appreciated.

    What format are you working in? If it's DV, simply create a new file in Photoshop using the DV preset and then adjust the vertical dimension to whatever is appropriate. Post back if you want, and I'll look at some of the crawls I've done and see what their dimensions are.
    If you want to send me your email address (my email is in my profile), I'll try and put together a small project with an end crawl or two from some recent projects, along with the Photoshop files.
    The trick is to create a separate sequence for the end crawl with the field dominance set to none. You animate the title crawl in this sequence, then render out a self-contained QT and bring it into your complete sequence.

  • Crawler - Refreshing Previously Imported Documents

    I checked the box in red below and it updated the metadata for a PDF that the customer said was not updating. I then re-ran the crawl. The document's metadata updated, BUT IT ALSO marked all the documents as updated. Will this checkbox always cause the documents to appear as UPDATED? Where do I specify how long to keep a document marked as UPDATED? Is it in a config file?
    Help on what checking the REFRESH THEM checkbox does:
    To refresh the previously imported documents as specified on the Document Settings page, check refresh them. Generally, refreshing documents is the job of the Document Refresh Agent; refreshing documents slows the crawler down. However, if you changed the document settings for this crawler or changed the property mappings in the associated document types, refreshing documents updates these settings for the previously imported documents.
    Advanced Settings
    Specify advanced options that affect how this Crawler imports content.
    Content Language
    The majority of content is in this language: (language chosen from a drop-down list)
    Importing Documents
    Import only new documents.
    Do not import documents already imported: [x] by this Crawler / from this Data Source
    When revisiting previously imported documents: [x] refresh them (I CHECKED THIS)  [x] regenerate deleted links
    Crawler Tag
    Mark imported documents with the following Crawler Tag:
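    The "refresh them" behavior described above can be pictured with a generic sketch (this is a hypothetical illustration, not the actual Plumtree crawler API; the function and dictionary names are invented): a crawler stores a signature per document and only re-imports when the signature changes, unless the refresh flag forces an update of everything previously imported.

```python
# Hypothetical sketch of the "refresh them" checkbox behavior.
# plan_crawl and the signature dicts are illustrative names, not a real API.

def plan_crawl(docs, seen_signatures, refresh_them=False):
    """Decide, per document, whether to import, refresh, or skip.

    docs: dict of location -> current signature
    seen_signatures: dict of location -> signature from the last crawl
    """
    actions = {}
    for location, sig in docs.items():
        if location not in seen_signatures:
            actions[location] = "import"    # new document
        elif refresh_them or seen_signatures[location] != sig:
            actions[location] = "refresh"   # forced by checkbox, or changed
        else:
            actions[location] = "skip"      # unchanged and not forced
    return actions
```

    With refresh_them=True, every previously imported document gets refreshed, which matches the observed behavior of all documents showing as UPDATED after one crawl with the box checked.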

    Hi! Thanks for the info.
    I checked unclassified docs - it is not there.
    What seems to be happening is that the source files are not going to the target, but to a subfolder within the target, even though the 'try to sort them into additional folders' check mark is not checked.
    The log says:
    Feb 13, 2007 2:57:33 PM- *** Job Operation #2 completed: Crawler successfully imported 8 cards into the catalog, rejected 0 documents, avoided reimporting 0 documents because their cards have been deleted, ignored 0 documents which have been previously rejected by the taxonomist, and ignored 8 documents imported by a previous operation. All successfully imported cards have been approved.(282609)
    Unfortunately this is happening on a Production environment, so as soon as I open PTSpy it is jammed with a zillion things...
    V
    Computers are like Old Testament gods; lots of rules and no mercy. ~Joseph Campbell

  • Crawler - Importing security

    There seems to be a limitation on the importing of security by a crawler - If a group or user can be associated with a "domain" (for example, an LDAP), then it's not a problem - an ACLEntry object is created for each, containing a domain and the name.
    But what if a user or group was created in Plumtree? The following seems to suggest that the group/user ACL cannot be imported. (from Chapter 26 of the Enterprise Web Development Guide):
    o Import security with each document (only available if importing security is enabled in the Crawler Web Service editor): To use this option, the source repository users must have been imported into the portal and mapped in the Global ACL Sync Map. Only Read access is imported, since Write access in the back-end repository and the portal are not equivalent.
    Has anyone found a way around this limitation? I have tried setting up the ACLEntry objects using a null or blank string in the domain, but these options do not seem to work.

    In 5.0.2, you can add the Plumtree auth source to the "Prefix - Domain Name Map" section of the global ACL sync map. This lets you map from any domain to Plumtree users and groups.
    There is no way around this limitation in older versions.
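    The "Prefix - Domain Name Map" fix can be pictured as a simple prefix-to-domain lookup (a hypothetical sketch only; the real Global ACL Sync Map lives in the portal, and the names and domains below are illustrative): crawled ACL names carry a prefix, and the map decides which auth source, including the native Plumtree one, each prefix resolves to.

```python
# Hypothetical sketch of resolving crawled ACL entries through a
# prefix -> domain map so native portal users/groups resolve too.

PREFIX_DOMAIN_MAP = {
    "CORP": "ldap.example.com",  # illustrative LDAP auth source
    "": "plumtree",              # empty prefix -> native portal users/groups
}

def resolve_acl_entry(raw_name):
    """Split 'PREFIX\\name' and map the prefix to an auth-source domain."""
    prefix, sep, name = raw_name.partition("\\")
    if not sep:                  # no prefix at all: treat as native
        prefix, name = "", raw_name
    domain = PREFIX_DOMAIN_MAP.get(prefix)
    if domain is None:
        raise KeyError(f"no auth source mapped for prefix {prefix!r}")
    return {"domain": domain, "name": name}
```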

  • Please Help!  Photo Import has Suddenly Slowed to a Crawl -- 19 hours

    I have transferred 1,000s of RAW photo files to my HD and external HD with no problem, until a few days ago. Suddenly, what used to take about 3 hours now takes 19-24 hours. I can't figure it out.
    I've changed card readers, I've tried different memory cards, I've even downloaded to external HDs and tried to upload directly to my HD. I've also hooked up my camera and tried to transfer the files via a USB cable (traditionally the slowest way). I've also tried to upload using different software programs: Lightroom, iPhoto and Bridge.
    I've also ran the Disk Utility on my HD and there weren't any problems.
    Nothing works.
    Please help.

    If you ran Disk Utility and only used +Repair Disk Permissions+, you may want to start up from another disk and run +Repair Disk+ (to look for and repair disk directory errors). If you don't have another bootable disk, you can start up from the Mac OS X (current version) installation disk. Insert the disk and start up with the C key held down to boot from the optical drive. When you get to the first Installer screen (after language selection), go up to the Utilities menu and select Disk Utility, then run +Repair Disk+ on your normal startup drive.
    Also, do you have sufficient free space on your hard drive? While you were moving those 1000s of photo files around, did your startup drive ever get down to less than 10 GB free?

  • All of my iMovie projects disappeared after loading Mavericks.  The updated version of iMovie shows no files and cannot find any files to Import.  My machine, a beefed up Mac Mini, has slowed to a crawl.  Suggestions?

    My Mac Mini takes forever to load programs now.  After loading Mavericks it seems like I am back in PC World again!!  My biggest concern is ALL of my iMovie projects seem to have vanished.  I do not have sound on some web sites like YouTube.  I lost the option to mirror from the Mac to the TV but I was able to solve that.  Any suggestions on recovery of files?  I do not use TimeMachine and did not back up anything before the Mavericks installation.  I know...that's bad.  Any suggestions would be greatly appreciated.
    Here is what I am running:
      Model Name:          Mac mini
      Model Identifier:          Macmini4,1
      Processor Name:          Intel Core 2 Duo
      Processor Speed:          2.66 GHz
      Number of Processors:          1
      Total Number of Cores:          2
      L2 Cache:          3 MB
      Memory:          8 GB
      Bus Speed:          1.07 GHz
      Boot ROM Version:          MM41.0042.B03
      SMC Version (system):          1.65f2
      Serial Number (system):          C07DV15PDD6L
      Hardware UUID:          132F7FB8-1556-55D3-BF33-C1A5EFC69D12


  • I have an Apple MacBook Pro and when surfing the web my computer will slow to a crawl, with a multi-colored spinning wheel visible until my latest request is handled. What is causing this, and is there a way to prevent it from occurring?

    I have a MacBook Pro.  When surfing the web it will eventually slow to a crawl.  When this occurs, there will be a small multi-colored wheel spinning until my latest command is handled.  What is causing this and is there a way that I can modify or prevent this from happening?  Is there a setting that will prevent this?

    When you next have the problem, note the exact time: hour, minute, second.
    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Scroll back in the log to the time you noted above. Select any messages timestamped from then until the end of the episode, or until they start to repeat. Copy them to the Clipboard (command-C). Paste into a reply to this message (command-V).
    When posting a log extract, be selective. In most cases, a few dozen lines are more than enough.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Important: Some private information, such as your name, may appear in the log. Anonymize before posting.
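    Pulling out just the messages from the noted time window, as described above, can also be done programmatically. A minimal sketch, assuming syslog-style lines whose third whitespace-separated field is an HH:MM:SS timestamp (real system.log lines look like "Mar  1 14:05:33 host process[pid]: message"; the parsing is illustrative, not exhaustive):

```python
# Minimal sketch: select log lines whose timestamp falls in a window.
# Assumes syslog-style lines: "Mon DD HH:MM:SS host process[pid]: message".
from datetime import datetime, time

def lines_in_window(lines, start, end):
    """Return lines whose HH:MM:SS field lies between start and end."""
    selected = []
    for line in lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        try:
            stamp = datetime.strptime(parts[2], "%H:%M:%S").time()
        except ValueError:
            continue  # third field wasn't a timestamp; skip the line
        if start <= stamp <= end:
            selected.append(line)
    return selected
```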

  • MacBook Pro Slowed to a CRAWL after less than a week....

    Hello All,
    I'm new to Mac. I just got a MacBook Pro last week.
    Well, for some reason it has slowed to a crawl....
    The dock show/hide and magnify isn't working either. Neither is the "active screen corners" feature on the dashboard. In addition to that, the system does not highlight menu items and show the submenus.
    Also when starting applications they take forever to load.
    It's acting like it has no ram even though it has 2 gigs and a 2.16 processor. The ram is showing up in the "About this MAC" screen though.
    I've tried turning off all startup login items. Nothing is being loaded into memory. I've also tried running the disk utility.... still having the same problems.
    I'm new to Mac so perhaps I'm missing something completely obvious.
    What would you all suggest is going on, and what do I need to do to fix it?
    Thank you
    MacBook Pro   Mac OS X (10.4.6)  

    James:
    I suspect Wizard's problems resulted from too many widgets and possibly some not quite compatible software. Slowdown can result from lack of free physical RAM, so that Virtual Memory swap files on disk have to be frequently accessed. I don't think he mentioned how much RAM is installed. His symptoms go beyond mere slowdown, however, so I would suspect other software problems as well.
    Cutting way back on widgets together with restarting can work wonders sometimes.
    OS X handles file fragmentation pretty well, for file sizes up to about 20 MB. It's probably not necessary to run defrag on a Mac unless one wants to clean up a large space for video work. I don't have defrag software. I'd rather wipe the drive (maybe once a year) and reload from a backup drive. I do swear by DiskWarrior, however, and hope they release an Intel version soon. DW does a great job of correcting disk and file errors and optimizes the directory.
    Apple has been packing a lot of changes into OS incremental upgrades. To interpret that as comparable to Windows Service Packs probably isn't very accurate. Yes, some bugs are fixed but there have been steady upgrades of core features. Security fixes are often included as well. But I've experienced no stability problems whatever.
    Some users recommend using Combo updates instead of using Software Update. I've always used Software Update and have never had a problem. The important thing, either way, is to make certain the operating system and disk directory are 'clean' and trouble-free before and after the update.
    I've never had to do an OS reinstall except once, when I was tinkering with System files and slipped up. Routine preventive maintenance works.

  • Crawler Help: Object reference not set to an instance of an object

    I'm trying to write a custom crawler and having some difficulties.  I'm getting the document information from a database.  I'm trying to have the ClickThroughURL be a web URL and the IndexingURL be a UNC path to the file on a back-end file share.  Also, I'm not using DocFetch.  The problem I'm having is that when the crawler runs I get the following error for every card:
    "4/19/05 13:43:30- (940) Aborted Card creation for document: TestDoc1.  Import error: IDispatch error #19876 (0x80044fa4): [Error Importing Card.
    Error writing Indexing File.
    SOAP fault: faultcode='soap:Server' faultstring='Server was unable to process request. --> Object reference not set to an instance of an object.']"
    Has anyone seen this before?  Any help you can provide would be greatly appreciated.  I have included the code from my document.vb in case that helps.
    Thanks,
    Jerry
    DOCUMENT.VB
    Imports System
    Imports Plumtree.Remote.Util
    Imports Plumtree.Remote.Crawler
    Imports System.Resources
    Imports System.Globalization
    Imports System.Threading
    Imports System.IO
    Imports System.Data.SqlClient
    Imports System.Text
    Namespace Plumtree.Remote.CWS.MoFoDocsOpen
        Public Class Document
            Implements IDocument
            Private m_logger As ICrawlerLog
            Private DocumentLocation As String
            Private d_DocumentNumber As Integer
            Private d_Library As String
            Private d_Name As String
            Private d_Author As String
            Private d_AuthorID As String
            Private d_Category As String
            Private d_ClientName As String
            Private d_ClientNumber As String
            Private d_DateCreated As DateTime
            Private d_DocumentName As String
            Private d_DocumentType As String
            Private d_EnteredBy As String
            Private d_EnteredByID As String
            Private d_FolderID As String
            Private d_KEFlag As String
            Private d_LastEdit As DateTime
            Private d_LastEditBy As String
            Private d_LastEditByID As String
            Private d_Maintainer As String
            Private d_MaintainerID As String
            Private d_MatterName As String
            Private d_MatterNumber As String
            Private d_Practice As String
            Private d_Description As String
            Private d_Version As Integer
            Private d_Path As String
            Private d_FileName As String
            Public Sub New(ByVal provider As DocumentProvider, ByVal documentLocation As String, ByVal signature As String)
                Dim location() As String = documentLocation.Split("||")
                Me.DocumentLocation = documentLocation
                Me.d_DocumentNumber = location(0)
                Me.d_Library = location(2)
                Dim objConn As New SqlConnection
                Dim objCmd As New SqlCommand
                Dim objRec As SqlDataReader
                objConn.ConnectionString = "Server=sad2525;Database=PortalDocs;Uid=sa;Pwd=;"
                objConn.Open()
                objCmd.CommandText = "SELECT * FROM DocsOpenAggregate WHERE Library = '" & Me.d_Library & "' AND DocumentNumber = " & Me.d_DocumentNumber
                objCmd.Connection = objConn
                objRec = objCmd.ExecuteReader()
                Do While objRec.Read() = True
                    Me.d_Name = objRec("Name")
                    Me.d_Author = objRec("Author")
                    Me.d_AuthorID = objRec("AuthorID")
                    Me.d_Category = objRec("Category")
                    Me.d_ClientName = objRec("ClientName")
                    Me.d_ClientNumber = objRec("ClientNumber")
                    Me.d_DateCreated = objRec("DateCreated")
                    Me.d_DocumentName = objRec("DocumentName")
                    Me.d_DocumentType = objRec("DocumentType")
                    Me.d_EnteredBy = objRec("EnteredBy")
                    Me.d_EnteredByID = objRec("EnteredByID")
                    Me.d_FolderID = objRec("FolderID")
                    Me.d_KEFlag = objRec("KEFlag")
                    Me.d_LastEdit = objRec("LastEdit")
                    Me.d_LastEditBy = objRec("LastEditBy")
                    Me.d_LastEditByID = objRec("LastEditByID")
                    Me.d_Maintainer = objRec("Maintainer")
                    Me.d_MaintainerID = objRec("MaintainerID")
                    Me.d_MatterName = objRec("MatterName")
                    Me.d_MatterNumber = objRec("MatterNumber")
                    Me.d_Practice = objRec("Practice")
                    Me.d_Description = objRec("Description")
                    Me.d_Version = objRec("Version")
                    Me.d_Path = objRec("Path")
                    Me.d_FileName = objRec("FileName")
                Loop
                objCmd = Nothing
                If objRec.IsClosed = False Then objRec.Close()
                objRec = Nothing
                If objConn.State <> ConnectionState.Closed Then objConn.Close()
                objConn = Nothing
            End Sub
            'If using DocFetch, this method returns a file path to the document in the backend repository.
            Public Function GetDocument() As String Implements IDocument.GetDocument
                m_logger.Log("Document.GetDocument called for " & Me.DocumentLocation)
                Return Me.d_Path
            End Function
            'Returns the metadata information about this document.
            Public Function GetMetaData(ByVal aFilter() As String) As DocumentMetaData Implements IDocument.GetMetaData
                m_logger.Log("Document.GetMetaData called for " & DocumentLocation)
                Dim DOnvp(23) As NamedValue
                DOnvp(0) = New NamedValue("Author", Me.d_Author)
                DOnvp(1) = New NamedValue("AuthorID", Me.d_AuthorID)
                DOnvp(2) = New NamedValue("Category", Me.d_Category)
                DOnvp(3) = New NamedValue("ClientName", Me.d_ClientName)
                DOnvp(4) = New NamedValue("ClientNumber", Me.d_ClientNumber)
                DOnvp(5) = New NamedValue("DateCreated", Me.d_DateCreated)
                DOnvp(6) = New NamedValue("DocumentName", Me.d_DocumentName)
                DOnvp(7) = New NamedValue("DocumentType", Me.d_DocumentType)
                DOnvp(8) = New NamedValue("EnteredBy", Me.d_EnteredBy)
                DOnvp(9) = New NamedValue("EnteredByID", Me.d_EnteredByID)
                DOnvp(10) = New NamedValue("FolderID", Me.d_FolderID)
                DOnvp(11) = New NamedValue("KEFlag", Me.d_KEFlag)
                DOnvp(12) = New NamedValue("LastEdit", Me.d_LastEdit)
                DOnvp(13) = New NamedValue("LastEditBy", Me.d_LastEditBy)
                DOnvp(14) = New NamedValue("LastEditByID", Me.d_LastEditByID)
                DOnvp(15) = New NamedValue("Maintainer", Me.d_Maintainer)
                DOnvp(16) = New NamedValue("MaintainerID", Me.d_MaintainerID)
                DOnvp(17) = New NamedValue("MatterName", Me.d_MatterName)
                DOnvp(18) = New NamedValue("MatterNumber", Me.d_MatterNumber)
                DOnvp(19) = New NamedValue("Practice", Me.d_Practice)
                DOnvp(20) = New NamedValue("Description", Me.d_Description)
                DOnvp(21) = New NamedValue("Version", Me.d_Version)
                DOnvp(22) = New NamedValue("Path", Me.d_Path)
                DOnvp(23) = New NamedValue("FileName", Me.d_FileName)
                Dim metaData As New DocumentMetaData(DOnvp)
                Dim strExt As String = Right(Me.d_FileName, Len(Me.d_FileName) - InStrRev(Me.d_FileName, "."))
                Select Case LCase(strExt)
                    Case "xml"
                        metaData.ContentType = "text/xml"
                        metaData.ImageUUID = "{F8F6B82F-53C6-11D2-88B7-006008168DE5}"
                    Case "vsd"
                        metaData.ContentType = "application/vnd.visio"
                        metaData.ImageUUID = "{2CEEC472-7CF0-11d3-BB3A-00105ACE365C}"
                    Case "mpp"
                        metaData.ContentType = "application/vnd.ms-project"
                        metaData.ImageUUID = "{8D6D9F50-D512-11d3-8DB0-00C04FF44474}"
                    Case "pdf"
                        metaData.ContentType = "application/pdf"
                        metaData.ImageUUID = "{64FED895-D031-11D2-8909-006008168DE5}"
                    Case "doc", "dot"
                        metaData.ContentType = "application/msword"
                        metaData.ImageUUID = "{0C35DD71-6453-11D2-88C3-006008168DE5}"
                    Case "rtf"
                        metaData.ContentType = "text/richtext"
                        metaData.ImageUUID = "{F8F6B82F-53C6-11D2-88B7-006008168DE5}"
                    Case "xls", "xlt"
                        metaData.ContentType = "application/vnd.ms-excel"
                        metaData.ImageUUID = "{0C35DD72-6453-11D2-88C3-006008168DE5}"
                    Case "pps", "ppt"
                        metaData.ContentType = "application/vnd.ms-powerpoint"
                        metaData.ImageUUID = "{0C35DD73-6453-11D2-88C3-006008168DE5}"
                    Case "htm", "html"
                        metaData.ContentType = "text/html"
                        metaData.ImageUUID = "{D2E2D5E0-84C9-11D2-A0C5-0060979C42D8}"
                    Case "asp", "idq", "txt", "log", "sql"
                        metaData.ContentType = "text/plain"
                        metaData.ImageUUID = "{F8F6B82F-53C6-11D2-88B7-006008168DE5}"
                    Case Else
                        metaData.ContentType = "application/octet-stream"
                        metaData.ImageUUID = "{F8F6B82F-53C6-11D2-88B7-006008168DE5}"
                End Select
                metaData.Name = Me.d_Name
                metaData.Description = Me.d_Description
                metaData.FileName = Me.d_FileName ' This is a file name - for example "2jd005_.DOC"
                metaData.IndexingURL = Me.d_Path ' This is a file path - for example "\\fileserver01\docsd$\SF01\DOCS\MLS1\NONE\2jd005_.DOC"
                metaData.ClickThroughURL = "http://mofoweb/docsopen.asp?Unique=" & HttpUtility.HtmlEncode(Me.DocumentLocation)
                metaData.UseDocFetch = False
                Return metaData
            End Function
            'Returns the signature or last-modified-date of this document that indicates to the portal whether the document needs refreshing.
            Public Function GetDocumentSignature() As String Implements IDocument.GetDocumentSignature
                Dim SigString As New StringBuilder
                Dim SigEncode As String
                SigString.Append(Me.d_DocumentNumber & "||")
                SigString.Append(Me.d_Library & "||")
                SigString.Append(Me.d_Name & "||")
                SigString.Append(Me.d_Author & "||")
                SigString.Append(Me.d_AuthorID & "||")
                SigString.Append(Me.d_Category & "||")
                SigString.Append(Me.d_ClientName & "||")
                SigString.Append(Me.d_ClientNumber & "||")
                SigString.Append(Me.d_DateCreated & "||")
                SigString.Append(Me.d_DocumentName & "||")
                SigString.Append(Me.d_DocumentType & "||")
                SigString.Append(Me.d_EnteredBy & "||")
                SigString.Append(Me.d_EnteredByID & "||")
                SigString.Append(Me.d_FolderID & "||")
                SigString.Append(Me.d_KEFlag & "||")
                SigString.Append(Me.d_LastEdit & "||")
                SigString.Append(Me.d_LastEditBy & "||")
                SigString.Append(Me.d_LastEditByID & "||")
                SigString.Append(Me.d_Maintainer & "||")
                SigString.Append(Me.d_MaintainerID & "||")
                SigString.Append(Me.d_MatterName & "||")
                SigString.Append(Me.d_MatterNumber & "||")
                SigString.Append(Me.d_Practice & "||")
                SigString.Append(Me.d_Description & "||")
                SigString.Append(Me.d_Version & "||")
                SigString.Append(Me.d_Path & "||")
                SigString.Append(Me.d_FileName & "||")
                Dim encoding As New UTF8Encoding
                Dim byteArray As Byte() = encoding.GetBytes(SigString.ToString())
                SigEncode = System.Convert.ToBase64String(byteArray, 0, byteArray.Length)
                Return SigEncode
            End Function
            'Returns an array of the users with access to this document.
            Public Function GetUsers() As ACLEntry() Implements IDocument.GetUsers
                'no acl info retrieved
                Dim aclArray(-1) As ACLEntry
                Return aclArray
            End Function
            'Returns an array of the groups with access to this document.
            Public Function GetGroups() As ACLEntry() Implements IDocument.GetGroups
                'no acl info retrieved
                Dim aclArray(-1) As ACLEntry
                Return aclArray
            End Function
        End Class
    End Namespace
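    GetDocumentSignature above concatenates every metadata field with a "||" delimiter and Base64-encodes the result so the portal can tell when anything about the document has changed. The same idea in a generic sketch (the field list is illustrative; hashing instead of Base64-encoding keeps the signature short no matter how many fields there are):

```python
# Generic sketch of a document change signature: join the metadata
# fields with a delimiter, then hash, so any field change yields a
# new signature. Field values here are illustrative.
import hashlib

def document_signature(fields):
    """fields: ordered list of metadata values (stringified before joining)."""
    joined = "||".join(str(f) for f in fields)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()
```

    The crawler compares the stored signature from the last run with the freshly computed one; a mismatch means the card needs refreshing.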

    1. I don't think you can just set the indexing URL to a UNC path.
    2. Try creating an index aspx page. Set MetaData.IndexingURL to the index aspx page, and include query string params for the encoded UNC path as well as the content type.
    3. In the index page, get the content type and path from the query string.
    4. Get the filename from the file path.
    5. Set the headers for Content-Type and Content-Disposition, e.g.
    Response.ContentType = "application/msword";
    Response.AddHeader("Content-Disposition", "inline; filename=\"" + filename + "\"");
    6. Stream out the file:
    FileStream fs = new FileStream(path, FileMode.Open);
    byte[] buffer = new byte[40000];
    int result;
    System.IO.Stream output = Response.OutputStream;
    do
    {
        result = fs.Read(buffer, 0, 40000);
        output.Write(buffer, 0, result);
    } while (result == 40000);
    Put the above in a try/catch, and then delete the temp file in the finally block.
    If this does not help, set a breakpoint in the code to find the error. Also use Log4Net to log any errors.
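    The chunked copy in step 6 can also be written as a self-contained helper for testing outside the page. This Java sketch is only an illustration (the `streamFile` name is mine; in the actual aspx page the destination would be `Response.OutputStream`):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;

public class StreamOut {
    // Copy a file to an output stream in 40 KB chunks, as in step 6 above.
    public static void streamFile(String path, OutputStream output) throws IOException {
        try (FileInputStream fs = new FileInputStream(path)) {
            byte[] buffer = new byte[40000];
            int result;
            while ((result = fs.read(buffer)) > 0) {
                output.write(buffer, 0, result);
            }
        }
    }
}
```

    The try-with-resources block plays the role of the try/finally cleanup mentioned above: the input stream is closed even if a read fails partway through.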

  • My Macbook Pro is at a "death crawl" pace, and I'm a hundred miles away from an Apple Store!

    Please help! All of a sudden, my MacBook Pro has slowed to a death crawl. I downloaded and installed OS X 10.8.3 and it ran perfectly fine for about 3 weeks. But after I downloaded and tried to install the latest Safari update (my computer ran out of battery and shut down before I could finish installing), my computer has slowed to an excruciating pace. It starts up very, very slowly and I constantly get the spinning beach ball, even when closing an application or typing. I tried starting in safe mode, clearing my cache, running system repair, checking my RAM and CPU usage, and even tried re-installing. Might it be Mountain Lion? I was also trying to download a movie from a sketchy website at the time my computer took a **** on me; might it be that? Anyway, please help, because I am hundreds of miles away from civilization and the nearest Apple Store. Thanks in advance!

    First, back up all data immediately, as your boot drive might be failing.
    There are a few other possible causes of generalized slow performance that you can rule out easily.
    Reset the System Management Controller.
    If you have many image or video files on the Desktop with preview icons, move them to another folder.
    If applicable, uncheck all boxes in the iCloud preference pane.
    Disconnect all non-essential wired peripherals and remove aftermarket expansion cards, if any.
    Check your keychains in Keychain Access for excessively duplicated items.
    Boot into Recovery mode, launch Disk Utility, and run Repair Disk.
    Otherwise, take the steps below when you notice the problem.
    Step 1
    Launch the Activity Monitor application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Activity Monitor in the icon grid.
    Select the CPU tab of the Activity Monitor window.
    Select All Processes from the menu in the toolbar, if not already selected.
    Click the heading of the % CPU column in the process table to sort the entries by CPU usage. You may have to click it twice to get the highest value at the top. What is it, and what is the process? Also post the values for % User, % System, and % Idle at the bottom of the window.
    Select the System Memory tab. What values are shown in the bottom part of the window for Page outs and Swap used?
    Next, select the Disk Activity tab. Post the approximate values shown for Reads in/sec and Writes out/sec (not Reads in and Writes out.)
    Step 2
    If you have more than one user account, you must be logged in as an administrator to carry out this step.
    Launch the Console application in the same way you launched Activity Monitor. Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Select the 50 or so most recent entries in the log. Copy them to the Clipboard (command-C). Paste into a reply to this message (command-V). You're looking for entries at the end of the log, not at the beginning.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Important: Some personal information, such as your name, may appear in the log. Anonymize before posting. That should be easy to do if your extract is not too long.

  • Problem crawling filenames with national characters

    Hi
    I have a big problem with filenames containing national (danish) characters.
    The documents get an entry in wk$url but have error code 404 (Not found).
    I'm running Oracle RDBMS 9.2.0.1 on Red Hat Advanced Server 2.1. The
    filesystem is mounted on the Oracle server using NFS.
    I configure Ultrasearch to crawl the specific directory containing
    several files, two of which contain national characters in their
    filenames. (ls -l)
    <..>
    -rw-rw-r-- 1 user group 13 Oct 4 13:36 crawlertest_linux_2_fxeFXE.txt
    -rw-rw-r-- 1 user group 19968 Oct 4 13:36 crawlertest_windows_fxeFXE.doc
    <..>
    (Since the preview function is not working in my Mozilla browser, I'm
    unable to tell whether or not the national characters will display
    properly in this post. But they represent lower and upper cases of the
    three special Danish characters.)
    In the crawler log the following entries are added:
    <..>
    file://localhost/<DIR_PATH>/crawlertest_linux_2_B|C?C%C?C?.txt
    file://localhost/<DIR_PATH>/crawlertest_linux_2_B|C?C%C?C?.txt
    Processing file://localhost/<DIR_PATH>/crawlertest_linux_2_%e6%f8%e5%c6%d8%c5.txt
    WKG-30008: file://localhost/<DIR_PATH>/crawlertest_linux_2_%e6%f8%e5%c6%d8%c5.txt: Not found
    <..>
    file://localhost/<DIR_PATH>/crawlertest_windows_B|C?C%C?C?.doc
    file://localhost/<DIR_PATH>/crawlertest_windows_B|C?C%C?C?.doc
    Processing file://localhost/<DIR_PATH>/crawlertest_windows_%e6%f8%e5%c6%d8%c5.doc
    WKG-30008:
    file://localhost/<DIR_PATH>/crawlertest_windows_%e6%f8%e5%c6%d8%c5.doc:
    Not found
    <..>
    The 'file://' entries look somewhat UTF-encoded to me (some chars are
    missing because they are not printable) and the others look URL-encoded.
    All other files in the directory seem to process just fine!
    In the wk$url table the following entries are added:
    (select status url from wk$url where url like '%crawlertest%'; )
    404 file://localhost/<DIR_PATH>/crawlertest_linux_2_%e6%f8%e5%c6%d8%c5.txt
    404 file://localhost/<DIR_PATH>/crawlertest_windows_%e6%f8%e5%c6%d8%c5.doc
    Just for testing purposes, a
    SELECT utl_url.unescape('%e6%f8%e5%c6%d8%c5') FROM dual;
    actually produces the expected result: fxeFXE
    To me this indicates that the actual filesystem-scanning part of the
    crawler can see the files, but the processing part of the crawler cannot
    open the files for reading, and it therefore fails with error 404.
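    As a side check on the encoding question above, the percent escapes from the crawler log can also be decoded outside the database. This is a small illustrative sketch (the class and method names are mine, not part of UltraSearch); the escapes are single-byte values, which points to Latin-1 (ISO-8859-1) rather than UTF-8:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DecodeDemo {
    // Decode percent escapes as single-byte Latin-1 characters.
    public static String decodeLatin1(String escaped) {
        return URLDecoder.decode(escaped, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        // The six escapes from the log decode to the lower- and
        // upper-case Danish letters ae/oe/aa.
        System.out.println(decodeLatin1("%e6%f8%e5%c6%d8%c5"));
    }
}
```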
    Since the crawler (to my knowledge) is written in Java, I did some
    experiments with the following Java program.
    import java.io.*;
    class filetest {
        public static void main(String args[]) {
            try {
                String dirname = "<DIR_PATH>";
                File dir = new File(dirname);
                File[] fs = dir.listFiles();
                for (int idx = 0; idx < fs.length; idx++) {
                    if (fs[idx].canRead()) {
                        System.out.print("Can Read: ");
                    } else {
                        System.out.print("Can NOT Read: ");
                    }
                    System.out.println(fs[idx]);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    The behavior of this program depends heavily on the language
    settings of the current shell (under Linux). If LC_ALL is set to "C"
    (which is a common default), the program can only read files with
    filenames NOT containing national characters (just like the Ultrasearch
    crawler). If LC_ALL is set to e.g. "en_US", then it is capable of
    reading all the files.
    I therefore tried to set the LC_ALL environment for the oracle user on
    my Oracle server (using locale_config and .bash_profile), but that did
    not seem to fix the problem at hand.
    So (finally) my question is: is this a bug in the Ultrasearch crawler,
    or simply a misconfiguration of my execution environment? If the
    latter, how do I configure my system correctly?
    Yours sincerely
    Martin Dahl Pedersen, Visanti ( mdp at visanti dot com )

    I posted my problems as a TAR on METALINK about a week ago.
    And it turns out to be a new bug in UltraSearch.
    It is now filed under BUG:2673282
    -- mdp

  • Service Accounts being crawled

    Dear all,
    I have just set up a SP2013 search center. In the people search, managed service accounts (e.g. sp_search, which I created to run the search application) show up as normal users. Of course, someone who logs in with one of them can create a blog
    (since I have a My Site Host site collection created).
    I think I have missed something in the Active Directory configuration? How can I mark sp_search as a service account rather than a SharePoint or Windows user? Thanks.
    (Sorry, I am not sure I should ask in this forum section.)
    Mark

    Hi Mark,
    Also check whether you have imported the service accounts from AD into the SharePoint User Profile Service database via the Synchronization Connection.
    Please go to the User Profile Service Application and click "Configure Synchronization Connection". Make sure the connection is connected to the Organizational Unit containing the required users from AD (and not the service accounts), then start a
    full synchronization to make sure the service account profiles no longer exist in the User Profile Service database. Then start a full search crawl and check the results again.
    http://blog.sharedove.com/adisjugo/index.php/2012/07/23/setting-user-profile-synchronization-service-in-sharepoint-2013/
    Thanks,
    Daniel Yang
    Forum Support

  • Boris title crawl missing after converting with Compressor

    Not sure where to put this, since it involves two applications, FCP and Compressor.
    I have my credits done as a Boris title crawl (roll). They roll up just like credits in the average movie. They're quite long, spanning almost 4 minutes. Some videos are also included in the open space during this title crawl.
    When I convert the files for use with DVD Studio, I used the DVD Best quality 90 minutes presets. It performed that conversion without a hitch.
    Looking at the video, everything looks great (it's interlaced, of course). No apparent problems, until the end, where the credits should be scrolling: there's nothing. The videos show up, but only one small section of the credits actually show, and it's cut-off.
    I checked the credits again in FCP, and they all fit 100% within the inner title-safe overlays. I've never had any trouble with these rolling credits, except for this problem now.
    Any ideas what could cause them to almost completely disappear? Most of the time the screen is pure black with absolutely NO text whatsoever. Only three lines show up, and they are cut off on both sides (they extend beyond the view of the screen on both the right and the left).
    Many thanks to whoever can assist me.

    So you're saying I should just export one as a test? What I was thinking is: export the title crawl as a self-contained movie, then import it back into FCP and then send it to Compressor; it can't mess it up like it does when it's a straight-up video, right?
    Still can't figure out why Compressor would mess it up like that though. Looks perfect when I export the whole movie as a self-contained movie, which I already did twice. I then burned a sample DVD Using iDVD as a test to see how my video looked on the TV screen. Everything was perfect, except a small amount of flickering in some text (but it wasn't enough to bother me).
    Yet, sending it to Compressor completely destroys the title crawl, leaving the intermixed videos untouched. Just plain weird...

  • Crawler issues

    I installed Plumtree content service for windows files (latest SP1 version).
    When I imported the packages and set the crawler to run on a Windows folder, the crawler runs with the error "unrecoverable error when bulk importing cards".
    But it does import cards into the system. I can see the cards/properties/files in edit mode, but when I go into browse mode, I do not see any files. The crawler is set to approve files automatically, and I also manually approved the files.
    Not sure where the problem is?

    Hi!
    It seems as though your Search service is not indexing those files, but they exist in the portal DB.
    Is that "unrecoverable error when bulk importing cards" error the only message you receive in the job log?
    Thanks

  • Remote content crawler on a file directory in a different subnet

    I'm trying to crawl a file directory that is on our company network but in a different subnet. It seems to be set up correctly, because I have managed to import most of the documents to the knowledge directory. However, when running the job a few times, sometimes it succeeds and sometimes it fails, without consistency. The main thing I notice is that it doesn't import the larger files (>5 MB), but our maximum allowed is 100 MB. Even when the job runs "successfully" there is a message in the job log:
    Feb 21, 2006 12:08:14 PM- com.plumtree.openfoundation.util.XPNullPointerException: Error in function PTDataSource.ImportDocumentEx (vDocumentLocationBagAsXML == <?xml version="1.0" encoding="ucs-2"?><PTBAG V="1.1" xml:space="preserve"><S N="PTC_DOC_ID">s2dC33967209AEE4710C5ED073C04B3EDCF_1.pdf</S><I N="PTC_DTM_SECT">1000</I><I N="PTC_PBAGFORMAT">2000</I><S N="PTC_UNIQUE">\\10.105.1.33\digitaldocs\s2dC33967209AEE4710C5ED073C04B3EDCF_1.pdf</S><S N="PTC_CDLANG"></S><S N="PTC_FOLDER_NAME">s2dC33967209AEE4710C5ED073C04B3EDCF_1.pdf</S></PTBAG>, pDocumentType == com.plumtree.server.impl.directory.PTDocumentType@285d14, pCard == com.plumtree.server.impl.directory.PTCard@1f6ef01, bSummarize == false, pProvider == [email protected]4)ImportDocumentExfailed for document "s2dC33967209AEE4710C5ED073C04B3EDCF_1.pdf"
    When the job fails, there is a different message:
    *** Job Operation #1 failed: Crawl has timed out (exception java.lang.Exception: Too many empty batches.)(282610)
    I tried increasing the time out periods for the crawler web service and the crawler job. That didn't seem to work. Any suggestions?

    Hi Dave,
    Did you fix this issue? I'm having the same error.
    Thanks!

Maybe you are looking for

  • Regardig error while updating the database table

    hi experts, I am trying to update the database table from the values contained in my internal table, but the system is giving this error, please help: The type of the database table and work area (or internal table) "ITAB_UPDATE" are not Unico

  • How do I get my widgets icon back on the dock?

    Help. All of a sudden my icon for widgets (speedometer) disappeared from the dock. I can still get to it by pressing F12 but I can't find a way to get it back on the dock. Does anyone have a solution?

  • Server does not support RFC 5746, see CVE-2009-3555

    When accessing option: "Personal Banking" at "www.onlinesbi.com" Firefox 3.6.8 crashes. ERROR CONSOLE displays "www.onlinesbi.com : server does not support RFC 5746, see CVE-2009-3555". OS: Windows Vista Home Premium SP-2.

  • How do I get the pictures of CDs to show up?

    The iPod nano says that pictures of the music you are playing are supposed to show up (pictures of what the CD cover looks like). Does anyone know how I can get them to show up?

  • Code not working like I want

    I have two date fields, "StartDate1" and "EndDate1". When I make the EndDate1 value less than StartDate1, the "button" remains VISIBLE, but INACTIVE, and I can move on to the next field. Why does the button remain visible, why is the button inactive,