https://support.hapara.com/hc/en-us/articles/202738133-Sub-Domain-and-Multi-Domain-Setups-for-Teacher-Dashboard

If your school or district's teacher and student accounts are in separate Google Apps domains, there are a few extra steps to set up Teacher Dashboard.
There are three ways for teachers and students to be in separate domains:
A sub-domain is a Google Apps domain that resides within another domain. All administration is carried out through the parent domain, but email addresses are separate, and some settings can differ between parent and child domains. Student sub-domains are often a "more specific" version of the teacher domain. For example, teachers may be located in the domain myschool.org, while students are in the sub-domain students.myschool.org.
Teachers and students can be in two completely different Google Apps domains. This includes scenarios in which teachers and students are in different sub-domains but share the same parent domain (e.g., teachers.myschool.org and students.myschool.org, with parent domain myschool.org).
When a domain alias is used, students may appear to be in a different domain than teachers even though they are in the same domain. A domain alias lets users receive emails sent to addresses in a different domain, and is usually set up to preserve existing email address structures.
For example, a school using the domain myschool.org might add a domain alias named students.myschool.org so that students can receive emails sent to addresses like [email protected], even though their user accounts are in the domain myschool.org.
With this type of configuration, Teacher Dashboard should be treated as a single domain setup, pointing to the primary domain that contains student accounts, not the alias domain.
How to distinguish sub-domains, multi-domains, and domain aliases
Administrators who have administrator access only to the teacher domain can determine their configuration by navigating to Domain Settings > Domain Names in the Google Apps Admin console and looking for one of the following:
- If the student domain is listed as a domain, it is a sub-domain setup.
- If the student domain is listed as a domain alias, then it is a domain alias setup.
- If the student domain is not listed, then it is a multi-domain setup.
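The three checks above amount to a simple decision rule. As an illustrative sketch only (the function and input names are hypothetical, not part of any Google or Hapara API):

```python
def classify_setup(listed_domains, student_domain):
    """Classify the setup from the Admin console's Domain Names listing.

    listed_domains: {domain_name: entry_type}, where entry_type is
    'domain' or 'domain alias' as shown in the console.
    """
    entry = listed_domains.get(student_domain)
    if entry == 'domain':
        return 'sub-domain setup'
    if entry == 'domain alias':
        return 'domain alias setup'
    return 'multi-domain setup'  # student domain not listed at all
```

For example, a listing of `{'students.myschool.org': 'domain alias'}` would classify as a domain alias setup, which should then be treated as a single-domain configuration.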
Currently, the following Teacher Dashboard features are not supported in a sub-domain or multi-domain setup:
- Class calendars
- Email copy function
Prerequisite Configuration Steps
Teacher Dashboard requires access to a variety of protected Google Apps functions in order to be configured in your domain. In a sub-domain or multi-domain environment, these permissions must be explicitly granted in your Google Apps domain. Please take the following steps to do so:
Check that both domains (teacher and student) have the OAuth key enabled by following the steps below in each domain. Note: if you are using a sub-domain setup, this step is not necessary, as sub-domains inherit this setting from their parent domain; in that case, you only need to check the following in your primary domain's Admin console.
- Log into the Google Apps Admin console as a super administrator.
- From the main dashboard, navigate to "Security" > "Advanced settings" > "Authentication" and click on "Manage OAuth domain key"
- Ensure that both check boxes on this page are enabled (and remember to "Save changes" if changes are made):
- "Enable this consumer key"
- "Allow access to all APIs"
Primary and Secondary domains

Using the guidelines below, identify which domain will be used as the primary domain, and which will therefore be the secondary domain. The primary domain is the domain from which we will copy the OAuth key and consumer secret key when setting up Teacher Dashboard.
- In sub-domain configurations, the primary domain must be the parent domain. The secondary domain will be the sub-domain.
- In multi-domain configurations either domain can be used as the primary domain.
Grant your secondary domain access to the primary domain's OAuth key
- Log into the primary domain's Google Apps Admin console as a super administrator. From the main dashboard, navigate to "Security" > "Advanced settings" > "Authentication" and click on "Manage OAuth Client access".
- Populate the "Client Name" text field with the name of the secondary domain.
- Copy and paste the value from the box below into the "API Scopes" text field. Please make sure that this value is copied as a single line, without any breaks.
https://docs.google.com/feeds/,https://docs.googleusercontent.com/,https://spreadsheets.google.com/feeds/,https://www.googleapis.com/auth/calendar,https://mail.google.com/,https://sites.google.com/feeds/,https://www.googleapis.com/auth/tasks,https://www.blogger.com/feeds/,https://picasaweb.google.com/data/,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/admin.directory.group,https://www.googleapis.com/auth/admin.directory.orgunit,https://www.googleapis.com/auth/admin.directory.user,https://www.googleapis.com/auth/plus.me
Updated 27 Jan 2014
- Click "Authorize" - IMPORTANT: At this point, you may receive an error message saying "This client name has not been registered with Google yet." Please disregard this message.
- On the same page, replace the value in the "Client Name" text field with the name of your primary domain. Please note that your primary domain may appear to have access to these APIs already, however this step is still required.
- Populate the "API Scopes" text field with the same value from the box above (see step iii).
- Click "Authorize".
https://lists.cairographics.org/archives/cairo/2010-February/019367.html

[cairo] [RFC] Color space API (partial proposal)
spitzak at gmail.com
Mon Feb 22 13:53:48 PST 2010
I do not think cairo_color_t is necessary, just the cairo_color_space_t.
I would prefer to set colors with an api like this:
cairo_set_source_color(cairo_t*, cairo_color_space_t*, void* data)
The reason is that the "data" is quite likely already in a memory
structure with the correct format (such as an array of float).
This also makes the emulation of the existing color api trivial without
allocating any temporary objects.
I also feel it may be possible to unify images and colors with this,
something I consider fairly important. A "color" is just a 1x1 image. We
could make cairo_color_space_t be a full description of the pixels in an image.
https://www.protechtraining.com/blog/post/performance-checklist-for-the-mobile-web-631

Have you been misusing the term web performance? Colt McAnlis, a developer advocate at Google, says that web performance is more than just how fast a page loads. It is also the experience a user has while using your app.
For this talk, Colt supplies a "programmer's checklist" for building HTML5 apps capable of making smooth transitions from desktop to mobile. If you work in an environment where the performance of your application matters to your company's bottom line, or if you just want to share some laughs with Colt while learning a thing or two about web performance, definitely check out this fun talk from HTML5DevConf.
Ready to learn more about HTML5?
http://www.eclipse.org/forums/index.php/t/489228/

I'm developing an RCP Eclipse application and I'm using a custom package explorer to show the files within a project.
I'm having trouble with the way I want to display certain files. I have successfully gotten certain types of files to be children of a parent file, and to show up under its dropdown menu. However, the original child files are still visible in the explorer (see attached picture).
Any help on this would be greatly appreciated, and I would of course be more than happy to include relevant source code. My package explorer was implemented through the org.eclipse.ui.navigator.navigatorContent extension point and a custom navigator content provider.
https://brynahaynes.com/blog/2018/8/16/how-to-figure-out-what-you-actually-want

How to Figure Out What You Actually Want
Now that you've eliminated the words "should" and "can't" from your manifesting vocabulary, it's time to hone in on what you actually want.
(Haven't crossed out those words yet? Watch this video first!)
What you think you want and what you actually want may not be the same thing. Here's why:
Getting the external "stuff" won't get you the feelings you want. You have to give yourself the experience of feeling what you want before the things you're manifesting will provide it for you.
So, think about not just what you want, but why you want it. How do you want to feel when you get the things you want?
The Universe has a cornucopia of amazingness for you to choose from. Anything on that platter could help you create the feelings you want, if you're open to making that internal shift. So think about how you want to feel, and use that as the basis for your manifestation!
https://docs.apigee.com/release/notes/156-apigee-sense-release-notes

You're viewing Apigee Edge documentation.
Go to the Apigee X documentation.
New features and enhancements
More detailed view of actions applied to IP addresses
For each client IP address included in analysis, Apigee Sense now provides a way for you to see the actions that could affect requests from the IP address. Details about each IP address now include a list of the actions taken for the IP address. These are listed in precedence order, from first to last.
You'll find IP details by clicking an IP address. For example, in the Apigee Sense console, click the Detection menu, then click Report. In the report page, click List View, then click an IP address.
Protection status page for viewing actions enabled for each client IP address
You can now view a list of client IP addresses with the action -- Allow, Block, or Flag -- that has been taken for each. You can click an IP in the list to see more detail about protection rules enabled for the IP address.
To reach the Protection Status page in the Apigee Sense console, click the Protection menu, then click Status.
https://answers.unrealengine.com/questions/68579/view.html

HLSL(.fx) to UE4 Material - porting advice?
I have a great many HLSL.fx shaders that I have written collected and purchased over the last 5 years.
So far, I am aware that use of Custom Expressions are discouraged for performance and portability reasons.
How would one go about porting that PerlinNoise.fx code to UE4 for use as a material effect?
(a) Use UE4 material nodes & functions to replicate the .fx code?
I am not looking for a 'perlin noise shader'. I am purely interested in the possible workflow for
The easiest way is probably to convert your HLSL into a function format (instead of a main entrypoint), paste it into MaterialTemplate.usf. Then use a Custom expression to call your function, and hook up the output however you want into the material system. Any inputs your code needs will have to be function parameters.
answered Jul 14 '14 at 05:07 PM
http://stackoverflow.com/questions/19449941/how-to-delete-the-one-applications-registry-key-from-the-regedit-using-python-s

I'm new to python. I want to delete the key which is in the regedit using python script.
Regedit tree view for my application key:

HKEY_CURRENT_USER
 |_Software
   |_Applications
     |_Application
       |_Test1
       |_Test2
In this, I want to delete Test1 key using python script.
I've used the below script:
import _winreg
Key_Name = r'Software/Applications/Application/Test1'
Key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, Key_Name, 0, _winreg.KEY_ALL_ACCESS)
_winreg.DeleteKey(key)
Traceback (most recent call last):
  File "C:\Users\Test\workspace\Test\DeletePreferences.py", line 9, in <module>
    key=_winreg.OpenKey(_winreg.HKEY_CURRENT_USER, r'Software/Applications/Application/Test1', 0, _winreg.KEY_ALL_ACCESS)
WindowsError: [Error 2] The system cannot find the file specified
Can anybody suggest a solution for this?
https://kiddionsmenu.com/how-long-does-it-take-to-learn-c/

C++ is a strong language that many accomplished programmers have used and continue to use. These days, there are a lot of programming languages that are quick to pick up and quickly apply, but it all depends on the objective we want to accomplish. So, How Long Does It Take to Learn C++?
The ability to process and apply logic as quickly as possible is the essence of coding. On the other hand, learning any other programming language and eventually picking up new skills becomes easier if you are proficient in one.
What Is C++?
The first programming language to link programming to a real-world entity is C++, which is also one of the most widely used object-oriented languages taught to students in universities worldwide. The development of apps for multiple platforms and devices also makes use of C++.
It combines better software performance with increased capacity, and it’s widely used to create orderly apps. Additionally, this language can be used on various platforms, is compiled, and has the best compatibility with C programming or any other language.
Why Should Learn C++ Programming?
Since C++ code makes up a large portion of the codebase of many contemporary systems, including web browsers, operating systems, databases, etc., C++ is a very important language in today’s world. Furthermore, because of its speed, C++ is quite useful in parts where performance is crucial.
Suitability for big projects
For large-scale projects, the C++ programming language is excellent. Among them are databases, cloud storage systems, graphic design, game development, compilers, and other projects.
Numerous professions require the C++ programming language. Professionals like game developers, software developers, backend developers, and C++ analysts are in high demand for this flexible language.
C++ is used almost everywhere. For example, it's used in software, browsers, and applications. Operating systems are another area in which C++ is heavily used. Almost all operating systems, including Windows, macOS, Linux, and others, use C++.
How Long Does It Take to Learn C++?
If you are a novice to programming, you will find learning C++ to be quite challenging. For the novice level, it will require two to three months. It will take you six to twelve months to grasp the intermediate level, and longer than that to become a language master.
Your knowledge of another programming language will be very beneficial. Learning programming languages such as Python and Java will be much easier for you if you already know these languages.
You will grasp the fundamentals in no more than two to three weeks. Learning C++ takes only this long if you concentrate on functional C++: input and output, classes (excluding objects), and file operations.
How quickly someone picks things up is another factor that affects how long it takes to learn C++. For a quick learner who is already familiar with other programming languages, it could take up to an hour.
Is C++ sufficient for developing games?
Learning C++ programming is a great idea because it’s a high-level language that will introduce you to the fundamentals of object-oriented programming. Windows and console games with the biggest graphics are also made with C++. But knowing C++ is essential for large games in larger gaming companies.
Which is better to learn first? C or C++?
Before learning the additional features that C++ offers, Geeks for Geeks advises learning C. This way, you can master the fundamentals first. There are shortcuts and simpler methods of doing things available with certain C++ features.
What uses does C++ have?
The development of video games frequently uses C++. Because video games are so intricate, a programming language that can keep up with everything that’s happening is needed.
It won’t take you long to learn C++ if you’ve already worked with other programming languages like Python and Java. This will enable you to become proficient in C++ within months. It’s going to be challenging for a total novice. If you are a slow learner, it could take you more than a year to comprehend the advanced level. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.28/warc/CC-MAIN-20240418093630-20240418123630-00535.warc.gz | CC-MAIN-2024-18 | 4,153 | 23 |
http://spinrewriter9.com/spin5/learn-all-about-article-rewrite-tool-from-this-politician-find-out-more.html

Spin Rewriter 9 rewrites content on paragraph, word and sentence level. It also turns sentences around and makes sure that only appropriate synonyms are used. Aaron added a special ARTICLE GENERATING ALGORITHM to Spin Rewriter 9.0. This algorithm makes sure that the generated articles are even more unique, and even more readable — if that was even possible!
The first thing you see when you login to your WordAI account (you can take advantage of the 3 days free trial) is the welcome page. On it, you will basically be introduced to what WordAI is and what it can do for you, including the latest releases of the software and a link to the Apex Forum, which is a pretty fast growing SEO forum created by the WordAI team.
Some writers buy batches of affordable private label rights (PLR) articles and spin them before submitting them. These PLR articles may not be the highest quality, but anything can be improved by rewriting. They are used because, even though the quality is mediocre, they still help attract visitors.
If an article rewriter tool, an instant article spinner or a paraphrasing tool is what you are looking for then our free article spinner tool will definitely help you in this regard. Are you a student, content writer or a teacher? Then this free online article rewriter is a life saver for you. Why spend hours writing an essay or paper when you can use this free article spinner and get the same results. If you are a content writer, this tool can help you with both writing a unique content and saving time. This best article rewriter online can even be used for your blogs and websites.
Readable And Unique Content – the recommended settings for this one are the same as for the one above with the only difference being the change of the 1st input to “Readable”. This will create more unique versions of your content while still keeping a high level of readability. Generally, this is the best setting as it gets a good balance between human readability and uniqueness of the content generated.
https://community.theforeman.org/t/foreman-community-demo-65/13421

Every few weeks we host a Community Demo to showcase new & interesting developments from the Foreman community. We encourage participation from any member of the community (although you do need a Google account), so if you’ve been working on something cool, please do come show it off.
This post is a wiki, so if you have something to show, add yourself to this table!
| Presenter | Time | Topic |
|---|---|---|
| @cintrix84 | 5 min | New Katello SRPMS content |
| @aruzicka | 3 min | Randomized execution order in REX |
| @kgaikwad | 5 min | API and CLI support for GCE compute resource |
| @sseelam2 | 5 min | Support for System Purpose on Activation Keys |
Sources of inspiration
Here’s a few places to find inspiration, in case you’ve forgotten what you’ve done recently
All Redmine issues closed in the last 21 days
Only my Redmine issues closed in the last 21 days
GitHub PRs labelled “Demo-worthy”
(let me know if you have other good links to go here!)
The YouTube link to watch the demo will be added a week before the show (which will also serve as a reminder to go add yourself to the agenda)
http://www.edugeek.net/forums/scripts/print-9198-fits-compliant-web-based-helpdesk-released-71.html | code | Have re upload the license file as I had done it as the local host and it hadn't of generated it as the FQDN, so if i accessed by typing in the fqdn it wouldnt let me log in/tell me it was unlicensed
Re-Authenticated for you.
Thanks, sorry for spoiling your Sunday :(
No problem, HTH :)
OK, something else: I had a problem logging on as the administrator, so I deleted the database and restarted. I went into the setup, and when I configure the database I get this error: "Failure to create database structure, the error was: Cannot insert the value NULL into column 'twitterCK', table 'OPSD.dbo.tblsettings'; column does not allow nulls. INSERT fails. The statement has been terminated."
I click it again and it says everything is fine, I create the user etc., and then I go to log on and it still won't let me log on.
Did you create the SQL database with the required permissions as per the installation KB article?
Yeah, I double-checked the setup of the SQL database and have deleted and re-set it up. I think it may have brought that error up, but it was then followed by the 3 green boxes saying all the database stuff had set up correctly...
Are you onsite or doing this remotely? can we get a remote connection to see whats is happening?
I'm onsite... I can set up a remote connection, but it will have to be through our external supplier's VPN as our LA only allows that.
Please check the support request I just raised for you.
Just completed a fresh install which went well - one thing we've had problems with is adding periods in the resource manager. We can add rooms, resources, set up weeks, etc without problem but when we select the "Periods" tab it's empty, no options to add or anything. Any ideas?
EDIT: Looks like the column period_time_header in tblrb_periods wasn't created during the install. Adding it solves the problem.
had the same thing with periods, but adding that column has worked
Thought it strange as we hadn't seen that issue crop up in testing.
https://braytonium.com/2016/01/10/we-be-derpin/

A lot can happen in a little over 2 years...
In my last post, I had proposed an attempt to tackle the FizzBuzz problem. PowerShell was done, PHP was barely started, but I never pointed to it in a subsequent post or finished what I wanted. The project URL has completed and checked solutions for PHP and Node.js. I had mentioned "F#, Objective-C, CoffeeScript, C/C++, Go, Dart, and Haskell are the planned languages I've mostly touched in passing or know about," as well as C#, Pascal, and Ruby, but I may never get to them.
My last post taught me that while I may know of a language, it doesn't mean I'll have a genuine desire to pursue it. It can also easily become difficult to want to pursue development outside of your day job. Staying current, however, is always worth pursuing. Tooling and efficiency around web development seems to have come a very long way.
To keep this post brief, I plan on making more updates as I feel a lot has changed for me in the past 2 years that I'd still love to share.
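For reference, the FizzBuzz exercise mentioned above is small enough to sketch in a few lines. The post's own solutions were in PowerShell, PHP, and Node.js; Python is used here purely as an illustration:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append('FizzBuzz')
        elif i % 3 == 0:
            out.append('Fizz')
        elif i % 5 == 0:
            out.append('Buzz')
        else:
            out.append(str(i))
    return out

# fizzbuzz(5) → ['1', '2', 'Fizz', '4', 'Buzz']
```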
https://www.publichealth.pitt.edu/home/directory/kar-hai-chu

Associate Professor, Behavioral and Community Health Sciences
My long-term goal is to develop a program of research focused on preventing tobacco-related cancer mortalities. I have a diverse background in computer science, social network analysis, online social media, and cancer prevention. The focus of my research has been leveraging innovative technologies to study tobacco control. My recent projects include exploring the presence of tobacco companies on social media and analyzing their behavior and strategies in marketing; studying the diffusion of anti-vaccination topics online; interventions for electronic cigarette use by adolescents; modeling new tobacco trends to inform regulatory agencies.
2019 | University of Pittsburgh | MS, Clinical Research
2012 | University of Hawaii | PhD, Communication and Information Sciences
2005 | Columbia University | MS, Computer Science
2001 | Johns Hopkins University | BS, Computer Science
ICRE 2600 / BCHS 2551 | Social Networks and Health
https://magento.stackexchange.com/questions/138774/category-setup-and-fetching-cms-block-from-custom-design-template

This Magento newbie is trying to create a new page layout and customization for level 3 categories. What I've done at this point is use the Custom Design tab to change the Page Layout to 1 column and call a template with the Custom Layout Update:
<reference name="series"> <action method="setTemplate"><template>catalog/category/seriespage.phtml</template></action> </reference>
In my page.xml files I have the block defined as "core/template".
But what I found is that in my template I could not access the category fields with $this - calls to $this->getName() and $this->getDescription() returned null. I finally was able to get them this way:
$category = Mage::getSingleton('category/layer')->getCurrentCategory(); $desc = $category->getDescription(); // The right stuff
However, the category ID is still unknown and not returned from a $category->getCategoryId() call as I think it should be. I see references to $this->getCategoryId() often.
Now I want to fetch the content of a CMS static block I've created, but this too seems to return null. Nothing is output from a statement such as:
Using $this instead of Mage::app() didn't work either, of course, and I tried other Mage variations. I thought defining this template in Manage Categories would give it the proper context, but apparently not. What am I missing to get $this set to my category context so the typical function calls work?
http://journals.plos.org/ploscompbiol/article/figure?id=10.1371/journal.pcbi.1000774.g003

Spontaneous Quaternary and Tertiary T-R Transitions of Human Hemoglobin in Molecular Dynamics Simulation
(A–C) Projections of Hb structures during simulations T.HC3-1 to T.HC3-3 on the two eigenvectors derived from a PCA of the T, R, and R2 X-ray structures. The color encodes the rotRMSD to the R X-ray structure. Full quaternary transitions of Hb occur during simulation time. (D) rotRMSD of simulations T.HC3-1 to T.HC3-3 to the R X-ray structure.
http://forums.worden.com/keeploggedin.aspx?g=profile&u=1620002

Worden Discussion Forum
Is there any way to scan for stocks prior to the open in the morning, and for post-market moves after the market close?
Is there any way for me to set an alert based on what the stock does during the opening 5-minute candle?
E.g., for a particular watchlist, I want to be alerted on all stocks that close at the high, the low, the top x percentile, or whatever the criteria is. I want the alert based on the 1st 5-minute candle. Is it possible?
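Outside of the platform itself, the filter being asked for is straightforward to express in code. An illustrative sketch only (the data layout and function names are hypothetical; this is not Worden/TC2000 scan syntax):

```python
def close_at_high(candle, tolerance=0.0):
    """True if the bar closed at (or within `tolerance` of) its high.

    candle: (open, high, low, close) for the first 5-minute bar.
    """
    _open, high, _low, close = candle
    return (high - close) <= tolerance

def first_candle_alerts(watchlist):
    """Return symbols whose first 5-minute candle closed at its high.

    watchlist: {symbol: (open, high, low, close)}
    """
    return [sym for sym, bar in watchlist.items() if close_at_high(bar)]
```

Other criteria (close at the low, close in the top x percentile of the bar's range) would just swap in a different predicate.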
http://scribblingseasprite.blogspot.com/2012/02/

I cruise around the blogosphere reading the blogs that I follow, checking out new ones, bookmarking the ones that I like some of which will end up on one of the sidebars. Recently I've noticed a couple of things: blogs which were hard to read because of the color scheme, links which were almost invisible because of the chosen colors, having to scroll down through a lot of stuff to find what I am looking for, and so on.
Here are a few things to consider the next time you change your blog or website design.
- A white background with black or dark letters is easier for most people to read.
- White letters on a dark background is also not hard to read, but most people prefer having a lot of white space around what they read and you can't get that with a dark background.
- Make sure the colors for your links are clearly visible.
- Put the important information near the top.
- If you like strong colors, consider using them as accents instead of background colors.
In case you didn't notice, I tweaked my layout to get my other blogs of interest up closer to the top. Now, if I could just find a better picture for my background...
https://www.litsy.com/web/book/10909/Slasher-Girls-Monster-Boys

I've just returned a lot of library books (run out of time on them) but I'm actually kinda glad- I can focus on some books I own now! 😂👍
Hopefully the tagged book won't be too scary for me lol 😳🤣
I meant to read this last Halloween, so better late than never. The first 3 stories are pretty good.
http://dgozli.com/

I’m an Assistant Professor of Psychology at University of Macau (since August 2016). My experimental work is concerned with understanding biases in perception and action, and what these biases reveal about human cognition. My theoretical work is concerned with the place of experimental psychology in a larger context of inquiry. I completed my PhD at University of Toronto (2015) under the supervision of Jay Pratt. I have also been a visiting researcher at University of Vienna (2014) and Leiden University (2015-16), where I worked with Ulrich Ansorge and Bernhard Hommel.
Gozli, D.G. (in press). Behaviour versus performance: The veiled commitment of experimental psychology. Theory & Psychology.
Gozli, D.G., Aslam, H., & Pratt, J. (2016). Visuospatial cueing by self-caused features: Orienting of attention and action-outcome associative learning. Psychonomic Bulletin & Review, 23, 459-467.
Gozli, D.G., Huffman, G., & Pratt, J. (2016). Acting and anticipating: Impact of outcome-compatible distractor depends on response selection efficiency. Journal of Experimental Psychology: Human Perception & Performance, 42, 1601-1614.
Gozli, D.G., Wilson, K.E., & Ferber, S. (2014). The spatially asymmetric cost of working memory load on visual perception. Journal of Experimental Psychology: Human Perception and Performance, 40, 580-591.
Teaching @ University of Macau
Systems & Theories in Psychology
Teaching @ University of Toronto
History of Psychology (Summer 2015) | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689413.52/warc/CC-MAIN-20170923014525-20170923034525-00368.warc.gz | CC-MAIN-2017-39 | 1,474 | 9 |
https://www.sololearn.com/discuss/1061343/how-can-we-unlock-free-code-camp-badge | code | How can we unlock free code Camp badge?
what is use of this free code Camp
link to freecodecamp here. go to settings
| s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00254.warc.gz | CC-MAIN-2023-06 | 476 | 7
http://forums.pcsx2.net/Thread-Cant-get-PCSX2-to-work | code | I have tried the different plugins, and its the same issue for all of them.
I try to run the game from my disc drive, the window opens but its blank.
Its the same with all plugins.
Im running on a Q6600 and an 8800 GT card.
any suggestions ?
first whats your pcsx2 config and wich version are you using?
IM using Gsdx plugin SSE2, IM rendering in Drect3D9 hardware, everything else is default
try this run execute if on vista try run as admin mod
09-16-2009, 03:13 PM
(This post was last modified: 09-16-2009, 03:14 PM by twdnewh.)
I'm using XP and I'm the admin user
09-16-2009, 03:18 PM
(This post was last modified: 09-16-2009, 03:20 PM by coder_fx.)
Use another CDVD plugin, or try making an ISO.
And which game are you trying to play? There are games where the emulator shows nothing, so which game?
Will try making an ISO and let you know... any recommendations for a CDVD plugin?
I've tried God of War 2 and Sly 2 so far
09-16-2009, 06:08 PM
(This post was last modified: 09-16-2009, 06:09 PM by Shadow Lady.)
Show us your settings so we can help better. Did you follow the configuration guide? Is there anything shown in the PCSX2 console window while this happens?
Core i5 3570k -- Geforce GTX 670 -- Windows 7 x64 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121528.59/warc/CC-MAIN-20170423031201-00138-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 1,212 | 21 |
http://www.greatcircle.com/firewalls/mhonarc/firewalls.199606/msg00249.html | code | > > What are some good methods to test UDP filters port by port?
> What about something like 'strobe' or 'netcat' ?
Well ... you often won't get back too much: there's no such thing
as a RST packet in UDP ... maybe you'll get ICMP port unreachable
but these may be suppressed.
Best bet would be to put up a sniffer before and after the filter
element, hit it with the packets and see what's getting through
(or sent back to the source).
BTW, this is useful for TCP, too, especially when sending
non-standard format packets (like etcp and friends). | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701852492/warc/CC-MAIN-20130516105732-00032-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 546 | 10 |
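To illustrate how little UDP gives back, here is a hedged Python sketch (the host and port are placeholders): on Linux, an ICMP port-unreachable reply surfaces on a connected UDP socket as a refused-connection error, while a suppressed or filtered reply is indistinguishable from an open-but-silent port, exactly as the post describes.

```python
import socket

def probe_udp(host, port, timeout=2.0):
    """Classify a UDP port as the thread describes: there is no RST in UDP,
    so we rely on ICMP port unreachable (or its absence)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))          # connect() lets ICMP errors surface as errno
    try:
        s.send(b"probe")
        s.recv(512)
        return "open"                # some service actually answered
    except ConnectionRefusedError:
        return "closed"              # ICMP port unreachable came back
    except socket.timeout:
        return "open|filtered"       # no reply: open, or ICMP suppressed
    finally:
        s.close()
```

This only distinguishes three states; as the post notes, a sniffer on both sides of the filter is the reliable way to see what actually got through.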
https://issues.jenkins.io/browse/JENKINS-64366 | code | Jenkins recently changed the name of the root breadbrumb from "Jenkins" to "Dashboard"
but this is confusing. whilst the root view may be a dashboard, it is also not the only dashboard. all the views / folders are really dashboards and there is nothing special about the root page that means it should be called the Dashboard (additionally plugins may create a special views that users also refer to as a dashboard).
Another issue is context menus here apply to Jenkins instance not a "dashboard" - the drop downs for "Manage Jenkins" (or other actions applicable to Jenkins) on a "Dashboard" is just weird.
the name should be reverted or changed to something that is not confusing | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370752.61/warc/CC-MAIN-20210305091526-20210305121526-00294.warc.gz | CC-MAIN-2021-10 | 681 | 4 |
https://grokbase.com/t/python/python-list/09bwbjt7nq/best-strategy-for-overcoming-excessive-gethostbyname-timeout | code | I'm writing a reliability monitoring app but I've run into a problem. I
was hoping to keep it very simple and single threaded at first but
that's looking unlikely now! The crux of it is this...
gethostbyname ignores setdefaulttimeout.
It seems gethostbyname asks the OS to resolve the address and the OS
uses it's own timeout value ( 25 seconds ) rather than the one provided
in setdefaulttimeout. 25 seconds of blocking is way too long for me, I
want the response within 5 seconds or not at all but I can see no
reasonable way to do this without messing with the OS which naturally I
am loathe to do!
The two ideas I've had so far are...
Implement a cache. For this to work I'd need to avoid issuing
gethostbyname until the cached address fails. Of course it will fail a
little bit further down i the calling code when the app tries to use it
and I'd then need it to refresh the cache and try again. That seems very
kludgey to me :/
A pure python DNS lookup. This seems better but is an unknown quantity.
How big a job is it to use non-blocking sockets to write a DNS lookup
function with a customisable timeout? A few lines? A few hundred? I'd
only need to resolve v4 addresses for the foreseeable.
Any comments on these strategies, or any suggestions of methods you
think might work better or be a lot easier to implement, warmly received. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00358.warc.gz | CC-MAIN-2022-33 | 1,342 | 22 |
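For what it's worth, one common workaround in modern Python (not something proposed in this 2009 thread) is to leave the blocking gethostbyname call alone and run it in a worker thread, giving the caller its own timeout; note the worker thread may still block in the background for the OS's full 25 seconds.

```python
import socket
from concurrent.futures import ThreadPoolExecutor, TimeoutError as LookupTimeout

_pool = ThreadPoolExecutor(max_workers=4)  # resolver worker threads

def resolve(hostname, timeout=5.0):
    """Return an IPv4 address string, or None if the lookup
    doesn't finish within `timeout` seconds."""
    future = _pool.submit(socket.gethostbyname, hostname)
    try:
        return future.result(timeout=timeout)
    except LookupTimeout:
        return None  # caller moves on; the thread finishes (or fails) later
```

This sidesteps both of the poster's options: no cache invalidation logic, and no hand-rolled DNS client over non-blocking sockets.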
https://rizult.al/job/lcxfrwktxl | code | The Senior Developer will be a key member of our growing technical team. The work will focus on expanding the functionality and adding new features to our quantitative application. The candidate will have the opportunity to work on interesting and challenging projects using advanced technologies while learning and implementing state-of-the-art techniques. Creativity is both welcomed and encouraged. The role includes possibilities for growth, leadership and equity participation.
Implement novel network/algorithm flow visualization and interaction features
Assist with extending the functionality of our core Python engine and associated class libraries
Actively participate in product demos and marketing with client prospects
4+ years in a commercial software development role
Familiarity with Django & Python
Creativity and a passion for learning, scientifically-minded & intellectually curious | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00601.warc.gz | CC-MAIN-2023-14 | 901 | 7 |
http://lists.w3.org/Archives/Public/w3c-sgml-wg/msg03012.html | code | Re: Radical cure for BOS confusion
At 10:56 AM 1/7/97, Terry Allen wrote:
>[ ... ]
>| | Except for the name space pollution. I'll suggest that as XML
>| | is already into application-specific PIs, it could use a PI here, too.
>| I don't see the default interpretation of alink as name space
>| pollution, because any user who cares about such things would know
>| enough to override the default interpretation.
>It's one more special case to explain to users: "You can name your
>elements anything you like, but watch out for 'alink'."
Exactly! While I like the idea of defaulting tags (because it will appeal
to Joe and Jane HTML) I can't stomach the namespace pollution. It's more
complicated to describe, and it's the kind of thing that, if you ever
forget it, and use a tagname like alink, it will bite you. If you're a
conscientious sort, and use DTDs like you should, you might even miss the
problem altogether, since it will only fail in the absence of a DTD that
does _not_ declare alink as the AF.
And if you think that last sentence is confusing, parse it carefully,
and observe that all those negations are necessary.
This seems like a gotcha that is just waiting to waste people's time.
I am not a number. I am an undefined character.
David Durand [email protected] \ [email protected]
Boston University Computer Science \ Sr. Analyst
http://www.cs.bu.edu/students/grads/dgd/ \ Dynamic Diagrams
MAPA: mapping for the WWW \__________________________ | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887077.23/warc/CC-MAIN-20180118071706-20180118091706-00422.warc.gz | CC-MAIN-2018-05 | 1,469 | 25 |
https://community.spiceworks.com/topic/2192851-cisco-jabber | code | Anyone running Jabber in a view environment. We are having some issues with ours. Its a little hard to explain so here it goes.
We are running view 5.5 and jabber version 11.8.5. The VDI team uses Liquidware Profile Unity to load Local and Roaming profiles users. And all the machines are linked clones and no-persistent
So when we migrated over from Windows 7 to windows 10 some users had to re-enter their Jabber log-in credentials. After, the users re-enter their credentials, check the auto sign-in box, and logs in everything works. In the same session the user can logout of jabber, exit jabber, and when they open it back up it auto logs them back in. When the user logs off or shuts down for the day and comes in the next day jabber dose not auto logs-in them in. Jabber starts up, shows the username and the check mark in the auto log-in box. They basically have to re-enter their passwords to log-in. This problem is starting to happen to all users when they have to change their account passwords on active directory as well.
I talked with a Cisco tech but he was unable to resolve this issue. He was able to give me some references to what the problem is. Jabber uses Windows APIs to encrypt the user credentials and probably can't decrypt the credentials. Screening through the Jabber logs, there's an error that looks like this:
[JCFCoreUtils::EncryptionUtils::decrypt] – Failed to decrypt [GetLastError=2148073483]
Finding these events is confirmation that Jabber cannot decrypt the stored credentials.
The most common cause of this is because the encryption was done on a different machine. More specifically, only a user with logon credentials that match those of the user who encrypted the data can decrypt the data AND decryption can only be done on the computer where the data was encrypted OR with a roaming profile that can decrypt the data from another computer on the network.
Cisco does not have a fix for this. I have found this is not a Jabber-specific problem; it will occur with any application using Windows encryption APIs.
Ultimately, this is a Windows Roaming configuration issue, so the best resource for resolving this problem is going to be Microsoft.
Cisco had a prior case for this. In that case, Cisco development team advised engaging Microsoft to assist with finding the problem. Working with Microsoft they found and fixed the Roaming profile configuration. They did not provide details of how they fixed it.
Does anyone know how to fix this problem or can give advice on how the Roaming profile needs to be configured to resolve this issue? | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00540.warc.gz | CC-MAIN-2022-21 | 2,591 | 11 |
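As a toy analogy of the failure mode (this is not DPAPI; the key derivation and XOR cipher below are invented purely for illustration), data encrypted under a key bound to user+machine only round-trips when both match, which is what breaks on a fresh non-persistent clone whose roaming profile didn't carry the key material across.

```python
import hashlib

def derive_key(user, machine_id):
    # Invented stand-in for the per-user/per-machine secrets DPAPI actually uses.
    return hashlib.sha256(f"{user}:{machine_id}".encode()).digest()

def xor_crypt(data, key):
    # Toy symmetric cipher: encrypting and decrypting are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Credentials saved on the clone where the user first signed in...
blob = xor_crypt(b"jabber-password", derive_key("alice", "CLONE-01"))
# ...decrypt fine on that same clone,
same_machine = xor_crypt(blob, derive_key("alice", "CLONE-01"))
# ...but come out as garbage on the next day's freshly provisioned clone.
other_clone = xor_crypt(blob, derive_key("alice", "CLONE-02"))
```

The fix described in the thread amounts to making the roaming profile carry the decryption material so "CLONE-02" effectively has the same key as "CLONE-01".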
http://forums.webosnation.com/webos-synergy-synchronization/204823-webos-1-2-does-eas-work-2.html | code | Same thing happened to me (1and1 hosted Exchange). It was working fine with 1.1. After the update to 1.2 I noticed that nothing was syncing. So I removed the account and figured I'd add it back to see if that fixed it. Only now it refuses to validate the incoming mail server. I'm stuck now and have no way of adding it back. My account settings are correct and verified. The account itself is fine (works from desktop without issue).
Originally Posted by CruiSin
Error: "Unable to validate incoming mail server settings. Check the settings and try again."
I have confirmed that the Pre is actually reaching the server because if I change the server name, I get a different error: "Unable to connect to incoming mail server."
My IMAP accounts work fine (Gmail and Fastmail).
Anybody else having this problem? If so, which provider are you using? Maybe if we compare notes, we'll find a common cause and solution. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123097.48/warc/CC-MAIN-20170423031203-00439-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 912 | 6 |
https://keeng.fileplanet.com/apk | code | Keeng - The music social network in Vietnam
We updated the new version, which includes the following adjustments:
- Fix bugs
- Update design
If you have got suggestions, please send them to the email address: [email protected]
Potentially dangerous permissions
- CAMERA: Required to be able to access the camera device.
- GET_ACCOUNTS: Allows access to the list of accounts in the Accounts Service.
- READ_CONTACTS: Allows an application to read the user's contacts data.
- READ_EXTERNAL_STORAGE: Allows an application to read from external storage.
- READ_PHONE_STATE: Allows read only access to phone state, including the phone number of the device, current cellular network information, the status of any ongoing calls, and a list of any PhoneAccounts registered on the device.
- READ_SMS: Allows an application to read SMS messages.
- RECEIVE_SMS: Allows an application to receive SMS messages.
- RECORD_AUDIO: Allows an application to record audio.
- WRITE_EXTERNAL_STORAGE: Allows an application to write to external storage.
- ACCESS_NETWORK_STATE: Allows applications to access information about networks.
- ACCESS_WIFI_STATE: Allows applications to access information about Wi-Fi networks.
- CHANGE_WIFI_STATE: Allows applications to change Wi-Fi connectivity state.
- DISABLE_KEYGUARD: Allows applications to disable the keyguard if it is not secure.
- INSTALL_SHORTCUT: Allows an application to install a shortcut in Launcher.
- INTERNET: Allows applications to open network sockets.
- SYSTEM_ALERT_WINDOW: Allows an app to create windows using the type TYPE_SYSTEM_ALERT, shown on top of all other apps. Very few apps should use this permission; these windows are intended for system-level interaction with the user.
- UNINSTALL_SHORTCUT: This permission is no longer supported.
- VIBRATE: Allows access to the vibrator.
- WAKE_LOCK: Allows using PowerManager WakeLocks to keep processor from sleeping or screen from dimming. | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00126.warc.gz | CC-MAIN-2019-35 | 1,937 | 25 |
https://www.neowin.net/editorials/thank-you-microsoft-for-the-new-volume-flyout-but-minor-updates-shouldn039t-take-this-long/ | code | With the release of Windows 8 nearly a decade ago, Microsoft introduced a new flyout for volume controls. And while you would expect that the UI would be updated with subsequent releases of Windows in order to align it with the changing design of the OS, this was inexplicably not the case. In fact, even Windows 11 shipped with the same decade-old interface for volume controls. Although the latest Dev Channel build 22533 finally updates this UI to align with the rest of the OS, I do have a bone to pick with Microsoft, especially since the change isn't available for general release yet.
Let's start off by talking about the existing UI available in general releases. A screenshot of it can be seen above in my updated Windows 10 build. You can clearly see how disparate the volume flyout in the top left looks when compared to the brightness slider in the bottom right. It's like they're from completely different versions of Windows, which they are, to be fair.
Now, for a moment, let's put aside the fact that the volume overlay is obtrusive - and something that people have been complaining about for years - and just talk about how something as obvious as this is making its way to production software.
While I have been involved in software development professionally and academically for the past nine years, I wouldn't presume to know about the intricacies or complexities of Windows development at all. And that's okay; my complaint is mainly that this is purely a cosmetic change. Surely it didn't need a decade of man-hours for Microsoft to finally update a small portion of its UI to make it consistent with the rest of its OS? Yet here we stand.
I want to emphasize that the new volume flyout is functionally the same as the existing one; a screenshot from the latest Dev Channel build can be seen above. That said, according to some of our forum threads, it may need further work.
Given that the new UI is finally out for Insiders and is apparently consistent with the rest of the OS, you could write this piece off as a rant from an irate Windows user. But do ponder over what reasoning, or lack thereof, came into play when making decisions that involved revamping the entire look of Windows 10 and Windows 11 but leaving the volume flyout as-is, for some reason. Some may even say that it's a minor thing that can easily be ignored; I would argue that it indicates a deeper problem in the Windows development process, where blatantly inconsistent, everyday-use interfaces are ignored.
At best, it represents Microsoft prioritizing more important things in Windows development, and at worst, it represents a total lack of attention to detail or the importance of a consistent UI that is being utilized by millions of users.
The general releases of both operating systems still contain a legacy interface for the volume flyout, and maybe one day Microsoft will publish a blog post explaining its reasoning. But until then, I will continue to wonder why such a minor but blatant cosmetic change took over a decade to implement.
What are your thoughts on the apparent snail-paced development and implementation of an updated volume flyout that is consistent with the rest of the OS? Why do you think it took so long? Let us know in the comments section below! | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100745.32/warc/CC-MAIN-20231208112926-20231208142926-00621.warc.gz | CC-MAIN-2023-50 | 3,295 | 9
https://packages.debian.org/zh-tw/squeeze/kfreebsd-amd64/dmake | code | 套件: dmake (1:4.12-2)
make utility used to build OpenOffice.org
Dmake is a make utility similar to GNU make or the Workshop dmake.
This utility has an irregular syntax but is available for Linux, Solaris, Win32, and other platforms. This version of dmake is a modified version of the original public-domain dmake, and is used to build OpenOffice.org.
* support for portable makefiles
* portable across many platforms
* significantly enhanced macro facilities
* sophisticated inference algorithm supporting transitive closure over the inference graph
* support for traversing the file system both during making of targets and during inference
* %-meta rules for specifying rules to be used for inferring prerequisites
* conditional macros
* local rule macro variables
* proper support for libraries
* parallel making of targets on architectures that support it
* attributed targets
* text diversions
* group recipes
* supports MKS extended argument passing convention
* directory caching
* highly configurable
Other packages related to dmake | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448399326483.8/warc/CC-MAIN-20151124210846-00036-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 1,048 | 6
https://manual.zoner.com/importing-and-exporting-descriptions-93a7bcd/ | code | Some programs generate and save simple one-line file descriptions in special, and quite non-standardized, files located in the same folders as the files being described. These files are generally have names like descript.ion, 0index.txt, files.bbs etc. Use Information | Data Import/Export | Export Descriptions and Import Descriptions to create such files or to bring the information from such files into pictures' standard picture information. During export, you can choose whether to use the Title or Description field as the description.
If the pictures already had description files, these will be overwritten by your export. If you check the Preserve remaining files' descriptions box, then the descriptions for any pictures in the folder that were not selected will be kept as-is; otherwise, they will be thrown away.
If you turn on Give exported file Hidden attribute, then the file will receive the Hidden file attribute. This will make it invisible to most programs. | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250619323.41/warc/CC-MAIN-20200124100832-20200124125832-00367.warc.gz | CC-MAIN-2020-05 | 976 | 3 |
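As an illustration, a minimal Python writer for such a one-line description file might look like this (the quote-names-with-spaces convention is an assumption based on common descript.ion usage, not something this manual specifies):

```python
def write_description_file(path, descriptions):
    """Write a descript.ion-style file: one line per described file,
    filename first (quoted if it contains spaces), then the description."""
    with open(path, "w", encoding="utf-8") as f:
        for name, text in descriptions.items():
            quoted = f'"{name}"' if " " in name else name
            f.write(f"{quoted} {text}\n")
```

A reader would do the reverse: split each line on the first unquoted space to recover the filename and its description.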
http://stackoverflow.com/questions/51211/e-commerce-tutorials/52209 | code | I've built 8 different E-Commerce sites now. I would highly recommend going with an existing framework or solution as it can be much more involved than you might think at first glance. The framework or solution you choose is really dependent on the complexity you need.
In the PHP/LAMP world, Magento Commerce has huge momentum, is very popular, and has many great strengths but its code is complex and some aspects of the Administrative interface can be confusing. I've used OSCommerce and it has some good points but it's terribly dated and the new version is 5 years behind schedule. Zen Cart is a spin off of OSCommerce which is more polished, regularly updated, and solid.
In the ASP.NET world, I've found AbleCommerce to be the best solution. NOPCommerce is an open source solution that has lots of strengths but they are currently migrating to MVC and I'd probably wait for that transition to settle out a bit before using it. DashCommerce is another Open Source offering but it seems to have lost momentum. I've also heard good things about BVCommerce but haven't used it. I would stay away from AspDotNetStorefront, MS Commerce Server, and the MediaChase ECF, way too many negative comments.
Another option would be to go with one of the many Hosted solutions like Shopify, Ultracart, or a Yahoo shop. These provide a low cost high value option if you don't need high levels of customization.
Also, before you start or choose anything, you should get a clear sense of your requirements. I've written an article that highlights the possible requirements for your site: http://www.efficionconsulting.com/Blog/itemid/649/amid/1500/e-commerce-discussion-points | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701445114/warc/CC-MAIN-20130516105045-00076-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 1,665 | 5 |
https://testcon.lt/vasudevan-swaminathan/ | code | TestCon Europe 2020
Vilnius and Online
Testing/Test Automation Professional, Company President & Principal Consultant
Zuci Systems, USA
A seasoned Testing/Test Automation professional with about 20 years' experience. President and Principal Consultant at Zuci Systems, an 80-member organization headquartered in Chicago, Illinois, focused on building intelligent automation solutions for the Banking, Retail and Energy sectors using Cognitive Automation tools and technologies such as Artificial Intelligence, Machine Learning and Deep Learning.
Test Automation: An Enigma That Continues to Haunt
With a plethora of technologies and disciplines flooding the Testing/Test Automation segment in the past 5 years aimed at improving application quality, are organizations setting up test automation using these tools to derive the expected benefits? | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529331.99/warc/CC-MAIN-20210122113332-20210122143332-00531.warc.gz | CC-MAIN-2021-04 | 842 | 7 |
https://community.infiniteflight.com/t/moderator-approving-topic/487158 | code | I created a topic in RWA a few days back and it said it required mod approval but I have never heard back.
Am I being impatient or did it get declined?
Hello Tomjet073, I think the topic was declined.
He should’ve received a notification that it was declined.
PM a moderator :) They should be able to resolve it.
To add on, usually, it takes under a couple hours for a RWA topic to be approved. At this point, if you’ve waited a few days, PM a moderator like I stated above!
The topic was declined. There is no automated response generated when a topic is declined, one must be manually sent to the user. If you would like to inquire on the status of your topic, feel free to reach out to @moderators and we’ll take a look. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00250.warc.gz | CC-MAIN-2020-45 | 724 | 7 |
https://www.techgig.com/question/42652/noah-are-different-wcf-communication-service-mvc-design-pattern-web-applications-wcf-windows-communication-foundation-mvc-model-view-controller-design | code | Noah, both are different. WCF is a communication service and MVC is a design pattern for Web based applications. WCF stands for Windows Communication Foundation. MVC stands for Model View Controller (a design pattern) Similarly we have many design patternsMVP and MVVM which are mostly used for windows based application. Off course MVVM is most popular now than MVP because of easily testable than MVP pattern.
| s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110792.29/warc/CC-MAIN-20170822143101-20170822163101-00119.warc.gz | CC-MAIN-2017-34 | 475 | 2
http://shikki.blog66.fc2.com/blog-entry-532.html | code | Who did make the sky in the earth ugly other than human?
Why I took these photographs is that it is inspired from
| s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815951.96/warc/CC-MAIN-20180224211727-20180224231727-00576.warc.gz | CC-MAIN-2018-09 | 360 | 7
http://serverfault.com/questions/159762/xp-ping-changes-routing-table/159891 | code | I have got a real strange behaviour with one of my XP-Sp3 machines.
Setup: A Server in the lan (192.168.5.0) proviedes access to all roadwarriors in 10.8.0.0 The DCHP has a static route for all clients pronouncing 192.168.5.235 as gateway for 10.8.0.0
All Clients can ping & access the vpn-machines; everything works like a charm
But one Xp-Sp3 is not willing to connect to them. It gets all the same routes as any other sytem in the lan and I trippel-checked - there are no static routes on this machine
When I ping any 10.8.0.0 device from this machine, the first two packaged work like a charm; but the next two (and any package after them) fail and get lost.
When I look back into the routing table: There is a new route; a special one just for the device I pinged, which points to the right gateway - but which wasn't there earlier...
As Long as this route exists the machine can't ping anything on 10.8.0.0. But if I remove the route by hand: The next to ping packages work fine...
Has anybody got an idea about that? Anybody every seen such a behaviour? Any hint / help / tip is greatly appreachiated!
thx in advance
PS: I attach an image of the cmd to clarify things - it's in German, but reading a routing table shouldn't be that hard... | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825358.53/warc/CC-MAIN-20160723071025-00011-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 1,245 | 10
https://forums.ni.com/t5/Real-Time-Measurement-and/cRIO-in-safety-application/td-p/3613203?profile.language=en | code | I'm looking into precedence of FPGA use in safety applications and wonder if cRIO with FPGA as a PAC can be used in safety-credited systems requiring fast (decades of us) response. To be specific, I'm looking for the following:
- Are there examples of cRIO use in systems with SIL 2 and higher?
- If yes, how is the proper level of functional safety achieved? How can redundancy/diversity be introduced into hardware or logic?
- What would be the verification/validation process?
What are you trying to do with it? (end application)
Also, what inputs are you using to determine required overall response time? E.g. sensor read time + wire traversal to module + read time + application processing time + output to pin + wire traversal + actuator
10's of us is very fast for SIL. 100's of us is still pretty fast for SIL applications. Is this being specified to you (e.g. as a system requirement)?
Thanks Andrew for the prompt response.
There are voltage signals from detectors that have to be digitized at 25kHz or more, 16 bit.
The reading should be accumulated over a running integration window.
A buffer can be used for this, with the newest sample added and the oldest removed, so that if the sum of the buffer exceeds a certain level, action should be taken.
Overall budget from event to action is 200 us, some time should be reserved for signal propagation (hundreds of meters) and mitigation devices to act on system trip.
This leaves 100us or less for processing electronics.
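A minimal Python model of the running-window trip logic described above (the window length and threshold here are invented for illustration; the real system would implement this on the FPGA to meet the sub-100 µs budget):

```python
from collections import deque

class WindowTrip:
    """Running-sum threshold detector: newest sample in, oldest out,
    trip when the sum over the window exceeds the limit."""

    def __init__(self, window, threshold):
        self.buf = deque(maxlen=window)   # oldest sample falls out automatically
        self.total = 0.0
        self.threshold = threshold

    def add(self, sample):
        if len(self.buf) == self.buf.maxlen:
            self.total -= self.buf[0]     # remove oldest from the running sum
        self.buf.append(sample)
        self.total += sample
        return self.total > self.threshold  # True -> trip the system
```

Keeping a running sum instead of re-summing the buffer each sample makes the per-sample work constant, which maps naturally onto a fixed-latency FPGA pipeline.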
We used a cRIO on an application with a high degree of safety criticality - a moving motion-compensation gangway for personnel transfer (see this case study - http://sine.ni.com/cs/app/doc/p/id/cs-14813). We didn't develop the safety case or the detailed safety functionality (we just implemented it; our skills are more on the control side), but the safety methodology was reviewed, approved and certified (once built and tested) by a marine certification body. I am sure some SIL rating could have been developed for the system, but I wasn't aware of that.
In this case the safety functionality was handled by a combination of:
This was all verified as part of a very detailed acceptance test, approved and witnessed by the certification body.
This was all very specific to this system, and probably not easily transferable to other applications. These white papers may be helpful for some basic insight, but I expect you are beyond that:
Hope this helps
Thanks for the post, Andy.
OP, do you need to pursue & receive an ACTUAL SIL certification for your end system?
Occasionally questions about SIL come up, but they end with the realization that MTBF numbers are sufficient. I don't know of any customers who have actually certified a cRIO to a SIL.
Understood, thank you all for the replies
The MTBF in our application should be the same as in other SIL2 systems, no other requirements specified.
My main argument for the cRIO use so far is that it is "proven in use". Also fail-safe operation can be explicitly programmed.
The downside is that there is no diversity in signal processing.
We have MTBF numbers for many of our cRIOs. If this is what you need, start a conversation with your local support (Applications Engineering) office (can email [email protected]). Please let them know which hardware model you're working with and reference this thread when creating your support request.
Just in case this is helpful - I was talking to one of NI sales engineers and they said NI will be launching a couple of SIL 3 rated C-Series modules (Digital IO and some limited analogue IO) that can be used for safety systems. These are configurable to provide logic solver functionality, but note they are not at all integrated into LabVIEW or the cRIO FPGA - they use the cRIO chassis simply to house them.
Having COTS SIL 3 IO modules would be nice, but if they are not going to talk to the FPGA and/or LabVIEW, they will act as safety PLCs (probably with faster signal processing).
After talking to local FPGA sales engineers, I'm inclined to make a redundant FPGA design on two chips from scratch.
Thanks everybody for the help!
https://www.mpug.com/topic/best-practices-example-mpp-files/?bbp_reply_to=415036&_wpnonce=d2c8ca81cb
I have, in the distant past, when learning to produce engineering drawings using AutoCAD, acquired actual building construction hardcopy drawings from the local construction association. These were very useful…learning by example from the work of others far more skilled than I on how to create quality drawings using standardized drafting methods.
In all Internet searches I have done I have never found even one actual MS Project file in the completed state to use as a reference. This does seem a wee bit strange to me. I have found several somewhat simplistic MPP files however never any actual files used in industry.
Any ideas on finding professionally designed MS Project MPP files (redacted where necessary) that could be used as examples of “best practices” in the design of MS Project files? KMD
@Keith, I’ve long thought of putting together a public repository of completed projects (needed for things like this and this – and even this). I don’t know how we PMs are ever going to benefit from Big Data until we get some. But to your need, I have seen many completed projects, and even have a stash, but can’t share due to NDAs / Confidentiality agreements. .MPPs can’t be redacted, and still function.
If anyone here is interested in starting a public repository of .MPPs, let me know! Cheers, Jigs
https://www.awesomeappliancerepair.com/blog/post/which-country-does-the-most-good-for-the-world
It's an unexpected side effect of globalization: problems that once would have stayed local—say, a bank lending out too much money—now have consequences worldwide. But still, countries operate independently, as if alone on the planet. Policy advisor Simon Anholt has dreamed up an unusual scale to get governments thinking outwardly: The Good Country Index. In a riveting and funny talk, he answers the question, "Which country does the most good?" The answer may surprise you (especially if you live in the US or China).
TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.
Find closed captions and translated subtitles in many languages at http://www.ted.com/translate
Follow TED news on Twitter: http://www.twitter.com/tednews
Like TED on Facebook: https://www.facebook.com/TED
Subscribe to our channel: http://www.youtube.com/user/TEDtalksDirector
https://answers.sap.com/questions/1087456/check-table-values.html
While going through the standard SAP program 'BCALV_GRID_EDIT', I came across this interesting piece of code for populating the field catalog of an OO ALV.
gs_fieldcat-fieldname = 'CURRENCY'.
gs_fieldcat-ref_table = 'SFLIGHT'.
gs_fieldcat-edit = 'X'.
gs_fieldcat-checktable = '!'.
gs_fieldcat-auto_value = 'X'.
append gs_fieldcat to gt_fieldcat.
Can somebody please explain the use of this EXCLAMATION MARK ('!') in the checktable value, in detail?
https://forum.powerscore.com/viewtopic.php?f=288&p=81201
- Fri Mar 22, 2019 7:11 pm
Hey Faith, let me see if I can help! The tasks are FWTSP, in that order, and they can be done in the course of either 2 days or else 3 days. Why not one day? Because the rules tell us that T and P cannot be the same day as each other. So, what are the options for tasks done per day? Let's run through them:
If the tasks are done in two days, the break between day 1 and day 2 has to be after Taping has been completed. That way we can be sure that Taping and Priming are on different days. So, we could do it this way:
Day 1: FWTS; Day 2: P (a 4-1 distribution of tasks to days)
Day 1: FWT; Day 2: SP (a 3-2 distribution)
So far, so good. Now, what if we take 3 days to do the job? We still need at least one break between T and P somewhere. Let's start with as many tasks on Day 1 as possible:
Day 1: FWT; Day 2: S; Day 3: P (that's a fixed 3-1-1 distribution, three tasks on the first day and one task per day for each of the other two days)
Now let's scale back Day 1 to just 2 tasks, which gives us these options:
Day 1: FW; Day 2: TS; Day 3: P (a fixed 2-2-1 distribution)
Day 1: FW; Day 2: T; Day 3: SP (a fixed distribution of 2-1-2)
Finally, what if there is only one task on Day 1? Here's what we could get:
Day 1: F; Day 2: WTS; Day 3: P (1-3-1)
Day 1: F; Day 2: WT; Day 3: SP (1-2-2)
That's it, there are no other ways to slice this one up and still keep T and P on different days. Now, notice that there are three different fixed variations where two days have two tasks each and one day has just one task. There's a 2-2-1, a 2-1-2, and a 1-2-2. That's why in the book we just lump them together as a single "unfixed" distribution of 1-2-2. That just means that there is more than one way to use that combination of numbers for tasks per day.
Why didn't we write out all 7 fixed distributions? Because that takes a lot of time! Also, there are only 5 questions, so 7 distributions are overkill. Instead, I would advise just looking at the unfixed distributions. How can I divide 5 things among 2 days, with at least one task per day? It's either 4-1 or else 3-2, unfixed (except they actually do turn out to be fixed, in this case). How do I do 5 into 3, with at least one per day? 3-1-1 or 2-2-1, both unfixed. 4 distributions, 5 questions - that's more like it!
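If you like double-checking this sort of counting, the seven fixed distributions can also be enumerated mechanically. Here's a tiny Python sketch (purely a verification aid, not something you'd do on test day):

```python
from itertools import combinations

def schedules(tasks="FWTSP", days=(2, 3)):
    """All ways to split the ordered tasks into consecutive, non-empty days,
    keeping T (taping) and P (priming) on different days."""
    out = []
    for d in days:
        for cuts in combinations(range(1, len(tasks)), d - 1):
            bounds = (0,) + cuts + (len(tasks),)
            day_groups = [tasks[bounds[i]:bounds[i + 1]] for i in range(d)]
            # Taping and Priming must not share a day
            if not any("T" in g and "P" in g for g in day_groups):
                out.append(day_groups)
    return out

for s in schedules():
    print("-".join(str(len(g)) for g in s), s)
print(len(schedules()), "fixed distributions")  # 7, matching the list above
```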
Now, how do we diagram the day component? We could do it as shown in the book, with either two or three columns to represent the days and the appropriate number of slots for the number of tasks done each day, or else we could try keeping the five tasks in a straight line and putting breaks between them. For example, question 20 establishes a 2-1-2 fixed distribution just in the way the question is asked (two people working on the first day and the same two working again on the third day), and that could be drawn like this:
_ _ | _ | _ _
FW T SP
From there it's just a question of who can do which of those tasks!
I hope that makes it clearer for you! Give that a try and see if it works out, and let us know if you need further help with it.
Adam M. Tyson
PowerScore LSAT, GRE, ACT and SAT Instructor
Follow me on Twitter at https://twitter.com/LSATadam
https://www.followjoshua.com/
Hey there! Welcome to FollowJoshua.
Clearly, my name is Joshua. 🙂 I’m an Australian who has built several online ventures.
This is my blog where I help people start or continue their journey online.
Essentially, how people can get started with affiliate marketing quite easily, even without any prior experience.
Through this journey, I’ve learned some key principles:
- Work hard, fail often (Then work a whole lot more…)
- Share the truth, even if it reduces sales and conversions
- Lead by example with full disclosure and transparency
- Give back so you can be a beacon of light for others to follow.
But there’s one thing in particular that I’ve learned, that is applicable to the majority of my audience.
It is this: It’s easy to start a business, it’s just hard to play the long game. To stick consistently at one thing, over and over again.
Too often people are dazzled by shiny lights or “get rich quick” scenarios from gurus and mentors. Most haven’t decided whether this online journey will suit their personality.
I’m known for saying that this is a game for the introverts. There is a certain type of person who can park themselves in front of the computer all day long. That’s me, and that could be you.
At the end of the day, I’m just a normal guy. I enjoy working smart as a guide and friend to the confusing world of making money through leveraging digital resources.
https://hataftech.com/machine-learning/advanced-machine-learning-algorithms-a-detailed-guide/
Advanced Machine Learning Algorithms
Advanced Machine Learning Algorithms (AMLAs) are a subset of machine learning techniques that go beyond traditional methods. They exhibit enhanced capabilities such as deep learning, reinforcement learning, and ensemble methods. AMLAs are characterized by their ability to handle complex tasks, learn intricate patterns, and make accurate predictions in diverse domains.
How do Advanced Machine Learning Algorithms differ from traditional approaches?
While traditional machine learning focuses on linear relationships and basic patterns, AMLAs delve into non-linear structures and intricate correlations. The key differentiators include the use of neural networks, deep architectures, and advanced optimization techniques, allowing AMLAs to handle more sophisticated data and deliver superior performance in various applications.
What are the real-world applications of Advanced Machine Learning Algorithms?
AMLAs find applications across a wide spectrum of industries. From healthcare and finance to marketing and robotics, these algorithms empower systems to recognize speech, translate languages, predict stock prices, and even control autonomous vehicles. The versatility of AMLAs makes them indispensable in solving complex problems that traditional approaches struggle to address.
How does Deep Learning contribute to Advanced Machine Learning Algorithms?
Deep Learning, a subset of AMLAs, involves neural networks with multiple layers (deep neural networks). This enables the model to automatically learn hierarchical features from data, allowing for more accurate and nuanced predictions. Deep Learning plays a pivotal role in image and speech recognition, natural language processing, and many other cutting-edge applications.
What are the challenges of implementing Advanced Machine Learning Algorithms?
Implementing AMLAs comes with its set of challenges. Data privacy concerns, the need for substantial computing power, and interpretability issues are common hurdles. Additionally, the demand for skilled professionals well-versed in AMLAs poses a challenge for organizations looking to leverage these advanced techniques effectively.
What are the ethical considerations associated with the use of Advanced Machine Learning Algorithms?
Ethical considerations are paramount when deploying AMLAs. Issues such as bias in algorithms, transparency, and the potential misuse of advanced techniques raise ethical concerns. Striking a balance between technological advancement and ethical responsibility is crucial to ensure the responsible use of AMLAs in various domains.
How can businesses integrate Advanced Machine Learning Algorithms into their operations?
Integrating AMLAs into business operations involves understanding the specific needs of the organization and identifying areas where advanced techniques can add value. This may include optimizing supply chain processes, enhancing customer experience, or improving decision-making through predictive analytics. Collaborating with experts and investing in training are key steps for successful integration.
Ensemble Methods in Advanced Machine Learning
Ensemble methods, a subset of Advanced Machine Learning Algorithms, are powerful techniques that involve combining multiple models to improve predictive performance and reduce overfitting. Let’s delve into the intricacies of ensemble methods and understand how they contribute to the success of AMLAs.
Understanding the Concept of Ensemble Learning
Ensemble learning involves combining the predictions of multiple models to achieve better accuracy and generalization than individual models. The underlying idea is that diverse models, when combined, can compensate for each other’s weaknesses, leading to a more robust and reliable overall prediction.
Types of Ensemble Methods
There are two main types of ensemble methods: bagging and boosting. Bagging, short for bootstrap aggregating, creates multiple subsets of the training data and trains each model independently. Random Forest is a popular example of a bagging ensemble method. Boosting, on the other hand, focuses on improving the weaknesses of individual models sequentially. Gradient Boosting and AdaBoost are well-known boosting techniques.
The Power of Diversity in Ensembles
The success of ensemble methods relies on the diversity of the individual models. If all models in an ensemble are similar, the benefits of ensemble learning are diminished. Therefore, it’s crucial to use different algorithms or tweak parameters to ensure diversity. The diversity allows the ensemble to capture a broader range of patterns and nuances in the data.
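The benefit of diversity can be made concrete with a little probability: if each of n classifiers is correct with probability p and their errors are fully independent (an idealized assumption), a majority vote is right whenever more than half of them are. A small sketch with illustrative numbers:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent classifiers,
    each correct with probability p, votes for the right label."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_vote_accuracy(0.70, 1))   # 0.70 — a single model
print(majority_vote_accuracy(0.70, 3))   # 0.784 — three diverse models
print(majority_vote_accuracy(0.70, 11))  # higher still
```

With correlated errors the gain shrinks toward zero, which is exactly why diversity among the base models matters.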
Overcoming Overfitting with Ensemble Methods
Overfitting occurs when a model learns the training data too well, capturing noise and producing poor generalization on new data. Ensemble methods address overfitting by combining multiple models, each trained on different subsets of data. This helps prevent overfitting on specific patterns present in only one subset, leading to a more generalized and accurate prediction.
Random Forest: A Robust Bagging Ensemble
Random Forest is a popular bagging ensemble method that combines the predictions of multiple decision trees. Each tree is trained on a random subset of the data, and the final prediction is determined by aggregating the outputs of individual trees. Random Forest is known for its robustness, scalability, and ability to handle high-dimensional data.
Boosting Techniques for Improved Predictions
Boosting focuses on sequentially improving the performance of weak learners. Gradient Boosting, for example, builds trees sequentially, with each tree correcting the errors of the previous one. AdaBoost assigns weights to instances in the dataset, with misclassified instances receiving higher weights, forcing subsequent models to focus more on those instances.
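To make the AdaBoost idea concrete, here is one round of the classic weight update in plain Python (a simplified sketch, not a full implementation; it assumes the weights are normalized and the weighted error is strictly between 0 and 1):

```python
import math

def adaboost_reweight(weights, y_true, y_pred):
    """One AdaBoost round: compute the weak learner's weighted error,
    its vote weight alpha, and the re-normalized instance weights."""
    err = sum(w for w, t, p in zip(weights, y_true, y_pred) if t != p)
    alpha = 0.5 * math.log((1 - err) / err)  # the weak learner's say in the final vote
    # Misclassified instances get heavier; correctly classified ones get lighter
    new_w = [w * math.exp(alpha if t != p else -alpha)
             for w, t, p in zip(weights, y_true, y_pred)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

# Four instances with equal weights; the weak learner gets the last one wrong
alpha, new_weights = adaboost_reweight([0.25] * 4, [1, 1, -1, -1], [1, 1, -1, 1])
print(round(alpha, 3), [round(w, 3) for w in new_weights])
# → 0.549 [0.167, 0.167, 0.167, 0.5]  — the misclassified instance now carries half the weight
```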
Challenges and Considerations in Ensemble Methods
While ensemble methods offer significant advantages, they are not without challenges. The increased complexity of ensembles can make them computationally expensive, requiring substantial resources. Additionally, selecting the right combination of models and parameters is crucial for optimal performance.
Real-World Applications of Ensemble Methods
Ensemble methods find applications in various domains, including finance, healthcare, and cybersecurity. They are particularly useful when dealing with large datasets, complex patterns, and situations where accurate predictions are critical. The versatility of ensemble methods makes them a valuable tool in the toolkit of data scientists and machine learning practitioners.
Ensuring Model Interpretability in Ensemble Learning
Interpretability is a crucial aspect of machine learning, especially in domains where decisions impact individuals’ lives. Ensemble methods, with their combination of multiple models, can pose challenges in interpretability. Techniques such as feature importance analysis and model-agnostic interpretability tools can help unravel the insights derived from ensemble models.
The Future of Ensemble Methods in Machine Learning
As machine learning continues to evolve, ensemble methods are likely to play an increasingly important role. Research in improving the efficiency and interpretability of ensembles will pave the way for their broader adoption. The synergy between ensemble methods and other advanced algorithms will contribute to the development of more robust and powerful machine learning models.
Navigating Reinforcement Learning in Machine Learning
Reinforcement Learning (RL) is a paradigm of machine learning where agents learn to make decisions by interacting with an environment. In the realm of Advanced Machine Learning Algorithms, RL stands out for its ability to excel in dynamic and complex scenarios. Let’s explore the fundamentals and applications of Reinforcement Learning.
The Essence of Reinforcement Learning
Reinforcement Learning is centered around the concept of an agent learning to take actions in an environment to maximize a cumulative reward signal. Unlike supervised learning, RL does not rely on labeled datasets but instead learns through trial and error. The agent explores the environment, receives feedback in the form of rewards or penalties, and adjusts its strategy to optimize its actions over time.
Components of Reinforcement Learning
RL involves three main components: the agent, the environment, and the reward signal. The agent takes actions in the environment, and the environment responds by transitioning to a new state and providing a reward signal. The goal of the agent is to learn a policy—a mapping from states to actions—that maximizes the cumulative reward over time.
Exploration vs. Exploitation Dilemma in Reinforcement Learning
One of the key challenges in RL is the exploration vs. exploitation dilemma. The agent must balance exploring new actions to discover better strategies (exploration) and exploiting known strategies to maximize immediate rewards (exploitation). Striking the right balance is crucial for effective learning and optimal decision-making.
Reinforcement Learning Algorithms
There are various algorithms in RL, each suited to different types of problems. Q-Learning, Deep Q Network (DQN), Policy Gradient Methods, and Actor-Critic are among the popular RL algorithms. These algorithms vary in their approach to learning and decision-making, making them applicable to a wide range of scenarios.
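As a minimal, self-contained illustration of the first of these, here is tabular Q-Learning on a toy five-state corridor (pure Python; the environment, rewards, and hyperparameters are invented for the example):

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4            # states 0..4; reaching state 4 pays reward 1
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}  # actions: left/right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(500):                     # episodes
    s = 0
    for _ in range(100):                 # step cap per episode
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, -1)], Q[(nxt, +1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # TD update
        s = nxt
        if s == GOAL:
            break

policy = [max((-1, +1), key=lambda act: Q[(st, act)]) for st in range(GOAL)]
print(policy)  # the learned greedy policy moves right in every state: [1, 1, 1, 1]
```

The discounted values (roughly 1, 0.9, 0.81, 0.73 for the "right" action as you move away from the goal) emerge purely from trial, error, and the reward signal.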
Deep Reinforcement Learning: Merging RL with Deep Learning
Deep Reinforcement Learning (DRL) combines the principles of RL with deep neural networks. This integration enables the handling of high-dimensional input spaces, making DRL suitable for tasks such as image recognition and complex control problems. DRL has achieved remarkable success in domains like playing games and robotic control.
Applications of Reinforcement Learning
Reinforcement Learning finds applications in diverse fields. In robotics, RL is used for training robotic arms to perform complex tasks. In finance, RL aids in portfolio optimization and algorithmic trading. RL also powers advancements in natural language processing, autonomous vehicles, and healthcare, showcasing its adaptability to various domains.
Challenges and Considerations in Reinforcement Learning
Despite its successes, RL faces challenges such as sample inefficiency, instability during training, and the need for careful tuning of hyperparameters. Addressing these challenges requires ongoing research and innovation in algorithm design and training methodologies.
Transfer Learning in Reinforcement Learning
Transfer Learning extends its influence to RL, allowing agents to leverage knowledge gained from one task to improve performance in a related task. This is particularly beneficial in scenarios where gathering data for each task is resource-intensive. Transfer Learning in RL accelerates learning and enhances the adaptability of agents to new environments.
Ethics in Reinforcement Learning
As with any advanced technology, ethical considerations are crucial in RL. Issues such as biased rewards, unintended consequences of learned policies, and the impact of RL on society need careful examination. The responsible development and deployment of RL systems are essential to mitigate potential risks and ensure ethical use.
The Future Landscape of Reinforcement Learning
The future of Reinforcement Learning holds exciting possibilities. Ongoing research aims to address current challenges, making RL more accessible and applicable to a broader range of problems. As RL continues to evolve, its integration with other advanced machine learning techniques will contribute to the development of more intelligent and adaptive systems.
Transfer Learning in Advanced Machine Learning
Transfer Learning is a paradigm in machine learning where a model trained on one task is repurposed for a related task, leveraging knowledge gained from the source task. In the realm of Advanced Machine Learning Algorithms, Transfer Learning plays a crucial role in enhancing model performance, especially in scenarios with limited labeled data.
Understanding Transfer Learning
Transfer Learning addresses the challenge of limited labeled data for a target task. Instead of training a model from scratch, Transfer Learning starts with a pre-trained model on a source task and fine-tunes it for the target task. This approach capitalizes on the knowledge encoded in the pre-trained model, allowing for more efficient and effective learning.
Types of Transfer Learning
There are two main types of Transfer Learning: feature-based and model-based. Feature-based Transfer Learning involves using the learned features from the source task in the target task. Model-based Transfer Learning goes a step further, transferring not only features but also the entire model or parts of it. The choice between these types depends on the similarity between the tasks and the available data.
The Role of Pre-trained Models in Transfer Learning
Pre-trained models, often trained on large datasets for tasks like image classification or natural language processing, serve as the foundation for Transfer Learning. These models capture general features from the source task, which can be repurposed for a new task. Common pre-trained models include ImageNet-trained models for computer vision tasks and language models for natural language processing.
Fine-tuning and Transfer Learning
Fine-tuning is a crucial step in Transfer Learning. After initializing the model with pre-trained weights, the model is fine-tuned on the target task’s data. This process allows the model to adapt to the nuances of the target task while retaining the knowledge gained from the source task. Careful consideration of hyperparameters during fine-tuning is essential for optimal performance.
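A toy numeric sketch of that idea (a made-up one-parameter "model", not a real network): given the same small budget of fine-tuning steps on scarce target data, starting from weights pre-trained on a related source task ends up with a lower loss than starting from scratch.

```python
def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps, lr=0.01):
    """Plain gradient descent on mean squared error for y ≈ w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

source = [(x, 2.0 * x) for x in (1, 2, 3)]   # related source task: y = 2.0x
target = [(1, 2.2), (2, 4.4), (3, 6.6)]      # scarce target task:  y = 2.2x

w_pretrained = train(0.0, source, steps=200)          # "pre-training" phase
w_finetuned = train(w_pretrained, target, steps=10)   # fine-tune on the target task
w_scratch = train(0.0, target, steps=10)              # same small budget, no transfer

print(mse(w_finetuned, target) < mse(w_scratch, target))  # True
```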
Addressing Domain Shift in Transfer Learning
Domain shift refers to the difference between the distribution of data in the source and target domains. Transfer Learning may face challenges when the source and target domains exhibit significant differences. Techniques such as domain adaptation and adversarial training aim to mitigate the impact of domain shift, ensuring the model’s robustness across diverse datasets.
Applications of Transfer Learning
Transfer Learning finds applications in various domains. In computer vision, models pre-trained on large datasets for image classification can be fine-tuned for specific recognition tasks. In natural language processing, pre-trained language models enable more efficient training for tasks like sentiment analysis and text classification. Transfer Learning is a valuable tool for domains with limited annotated data.
Comparative Analysis of Machine Learning Algorithms
Choosing the right machine learning algorithm for a specific task is a critical decision that significantly impacts the success of a project. Let’s delve into a comparative analysis of various machine learning algorithms to understand their strengths and applications.
Linear Regression: Predicting Numerical Values
Linear regression is a supervised learning algorithm used for predicting numerical values. It establishes a linear relationship between the input features and the target variable. This algorithm is commonly applied in scenarios where the goal is to estimate a continuous outcome.
Key Features of Linear Regression:
- Simple Yet Effective: Linear regression is straightforward and easy to implement, making it suitable for quick predictions.
- Interpretability: The model’s coefficients provide insights into the impact of each feature on the predicted outcome.
- Assumption of Linearity: Linear regression assumes a linear relationship between input features and the target variable.
Common Applications:
- Financial Forecasting: Predicting stock prices or currency exchange rates.
- Sales Prediction: Estimating future sales based on historical data.
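For a single feature, the least-squares line can even be computed in closed form; a dependency-free sketch (the numbers are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept (one feature)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Monthly ad spend (k$) vs. sales (k$) — made-up numbers
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0 → predicted sales = 2 * spend + 1
```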
Decision Trees: Powering Classification Tasks
Decision trees are versatile machine learning algorithms used for both classification and regression tasks. These tree-like structures make decisions by traversing from the root to the leaves based on input features.
Key Features of Decision Trees:
- Intuitive Decision-Making: Decision trees provide a transparent and easy-to-understand decision-making process.
- Handling Nonlinear Relationships: Decision trees can capture complex nonlinear relationships within the data.
- Prone to Overfitting: Without proper tuning, decision trees may overfit the training data.
Common Applications:
- Medical Diagnosis: Identifying diseases based on patient symptoms.
- Credit Scoring: Assessing creditworthiness of individuals.
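To make the tree-building intuition concrete, here is a minimal "decision stump" — a one-split tree that picks the threshold minimizing weighted Gini impurity. Full tree learners repeat this search recursively on each branch; the data is hypothetical:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Try every midpoint threshold on one feature; return the split
    with the lowest weighted Gini impurity across the two branches."""
    order = sorted(zip(xs, ys))
    best = (float("inf"), None)
    for i in range(1, len(order)):
        threshold = (order[i - 1][0] + order[i][0]) / 2
        left = [y for x, y in order if x <= threshold]
        right = [y for x, y in order if x > threshold]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(order)
        best = min(best, (score, threshold))
    return best  # (weighted impurity, threshold)

# Hypothetical credit scores labelled 0 = default, 1 = repaid
impurity, threshold = best_split([480, 520, 610, 700, 730], [0, 0, 1, 1, 1])
print(impurity, threshold)  # 0.0 565.0 → a perfect split between 520 and 610
```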
K-means Clustering: Unveiling Patterns in Unsupervised Data
K-means clustering is a popular unsupervised learning algorithm used for grouping similar data points. It aims to partition the data into k clusters, where each data point belongs to the cluster with the nearest mean.
Key Features of K-means Clustering:
- Efficient and Scalable: K-means is computationally efficient and suitable for large datasets.
- Requires Specifying the Number of Clusters: The number of clusters (k) needs to be defined beforehand.
- Sensitive to Initial Centroid Positions: Results may vary based on the initial placement of centroids.
Common Applications:
- Customer Segmentation: Grouping customers based on purchasing behavior.
- Anomaly Detection: Identifying outliers in financial transactions.
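The algorithm itself fits in a few lines. Here is a dependency-free one-dimensional sketch with hand-picked starting centroids (real libraries add smarter initialization such as k-means++; the data is made up):

```python
def kmeans_1d(points, centroids, iters=10):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(pts) / len(pts) if pts else centroids[i]
                     for i, pts in clusters.items()]
    return centroids, clusters

# Daily spend of two customer groups (made-up data)
centroids, clusters = kmeans_1d([1.0, 1.2, 0.8, 7.9, 8.0, 8.3], centroids=[0.0, 10.0])
print(centroids)  # two cluster centres, near 1.0 and 8.07
```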
Neural Networks: Powering Deep Learning
Neural networks, inspired by the human brain, have revolutionized the field of machine learning through deep learning. These interconnected layers of nodes are capable of learning intricate features and patterns from vast amounts of data.
Key Features of Neural Networks:
- Capacity for Complex Tasks: Neural networks excel in tasks such as image recognition, natural language processing, and speech synthesis.
- Requires Large Datasets: Deep learning models often require substantial amounts of labeled data for training.
- Black Box Nature: Understanding the decision-making process of neural networks can be challenging.
Common Applications:
- Image Recognition: Identifying objects in images or videos.
- Language Translation: Translating text from one language to another.
Comparative Analysis Table
Let’s summarize the comparison of these machine learning algorithms:
| Algorithm | Applications | Strengths | Weaknesses |
| --- | --- | --- | --- |
| Linear Regression | Financial Forecasting, Sales Prediction | Simple, Interpretability, Assumption of Linearity | Limited to Linear Relationships, Sensitivity to Outliers |
| Decision Trees | Medical Diagnosis, Credit Scoring | Intuitive Decision-Making, Handling Nonlinear Relationships | Prone to Overfitting, Lack of Robustness |
| K-means Clustering | Customer Segmentation, Anomaly Detection | Efficient and Scalable, Requires Specifying Number of Clusters | Sensitive to Initial Centroid Positions |
| Neural Networks | Image Recognition, Language Translation | Capacity for Complex Tasks, Requires Large Datasets | Black Box Nature, Computational Intensity |
The Role of Neural Networks in Machine Learning
Neural networks, the building blocks of deep learning, play a pivotal role in advancing the capabilities of machine learning. Understanding the intricacies of neural networks is essential for anyone seeking to harness the power of deep learning.
Anatomy of Neural Networks
Neural networks consist of layers of interconnected nodes, each layer contributing to the extraction and transformation of features. The three main types of layers are the input layer, hidden layers, and output layer. The input layer receives data, hidden layers process it through weighted connections, and the output layer produces the final result.
Types of Neural Networks
Feedforward Neural Networks
In feedforward neural networks, information travels in one direction—from the input layer to the output layer. These networks are used for tasks like image recognition and classification.
Recurrent Neural Networks (RNNs)
RNNs have connections that form a cycle, allowing them to retain information over sequential data. They are suitable for tasks involving time-series data, natural language processing, and speech recognition.
Convolutional Neural Networks (CNNs)
CNNs are designed for tasks involving grid-like data, such as images. They use convolutional layers to extract hierarchical features, making them effective for image classification and object detection.
Training Neural Networks
Training a neural network involves adjusting its weights and biases to minimize the difference between predicted and actual outcomes. This process, known as backpropagation, uses optimization algorithms like gradient descent to iteratively update the model’s parameters.
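A from-scratch sketch of exactly that loop — forward pass, backpropagated errors, gradient-descent updates — for a tiny 2-2-1 sigmoid network learning the AND function (pure Python, purely illustrative; real frameworks automate these gradient computations):

```python
import math, random

random.seed(1)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# A tiny 2-2-1 network: two hidden sigmoid neurons, one sigmoid output
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # [w_x1, w_x2, bias]
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # [w_h1, w_h2, bias]

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # the AND function

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    return h, sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial, lr = loss(), 1.0
for _ in range(4000):
    for x, t in data:
        h, out = forward(x)
        # Backpropagation: error at the output, then at each hidden neuron
        d_out = (out - t) * out * (1 - out)
        d_h = [d_out * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Gradient-descent updates: output layer, then hidden layer
        for i in range(2):
            w_o[i] -= lr * d_out * h[i]
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
        w_o[2] -= lr * d_out

print(f"mean squared error: {initial:.3f} -> {loss():.3f}")  # the error falls as training proceeds
```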
Challenges and Considerations
Neural networks, especially deep ones, are prone to overfitting, where the model performs well on training data but poorly on new, unseen data. Techniques like dropout and regularization help mitigate overfitting.
The complex nature of neural networks makes them challenging to interpret. Researchers are actively working on developing techniques to enhance the interpretability of these models for increased trust and transparency.
Applications of Neural Networks
Neural networks find applications in a myriad of fields:
- Image Recognition: Identifying objects in images or videos.
- Natural Language Processing: Understanding and generating human language.
- Speech Synthesis: Creating synthetic voices for virtual assistants.
Understanding the role of neural networks provides a foundation for exploring the vast possibilities of deep learning in machine learning applications.
https://www.karkidi.com/Find-Jobs/Category/executive-director-data-science-jobs
Irvine, CA, USA
15 May 2021
Allergan Data Labs is on a mission to transform the Allergan Aesthetics business at AbbVie, the fourth largest pharmaceutical company in the world. Allergan Aesthetics brands are iconic...
Python Programming, Scala Programming, C++, Java Programming, Machine learning techniques, Data science techniques, SQL, Apache Hadoop, MapReduce, TensorFlow, PyTorch, R Programming, Statistics, Biostatistics
https://techcommunity.microsoft.com/t5/azure-sql-blog/configure-your-tempdb-max-size-in-azure-sql-managed-instance/ba-p/3715655
Today we are pleased to announce a new option to configure your TempDB max size. As expected, your new TempDB configuration will persist upon a server restart, an instance update management operation, or a failover.
What are TempDB size, used space and free space?
TempDB size is the sum of the file sizes of all TempDB files. A TempDB file size is an allocated (zeroed) space for that TempDB file. Initial file size for all TempDB files is 16 MB. That is the size all TempDB files are set to upon a restart, a failover, or an instance update. Every time a TempDB data file’s used space reaches the file size, all TempDB data files auto-grow by their configured (or default) growth increments. Similarly, every time the TempDB log file’s used space reaches the log file size, the log file auto-grows by its configured (or default) growth increment.
TempDB used space is the sum of the used spaces of all TempDB files. A TempDB file used space is equal to the part of that TempDB file size that is occupied with non-zero information.
The sum of TempDB used space and TempDB free space is equal to the TempDB size.
Find your TempDB size, used space and free space
Run the following script to get your used TempDB data files’ space, free TempDB data files’ space and TempDB data files’ size:
SELECT SUM((allocated_extent_page_count)*1.0/128) AS TempDB_used_data_space_inMB,
SUM((unallocated_extent_page_count)*1.0/128) AS TempDB_free_data_space_inMB,
SUM(total_page_count*1.0/128) AS TempDB_data_size_inMB
Run the following script to get your used TempDB log file’s space, free TempDB log file’s space and TempDB log file’s size:
SELECT used_log_space_in_bytes*1.0/1024/1024 AS TempDB_used_log_space_inMB,
(total_log_size_in_bytes- used_log_space_in_bytes)*1.0/1024/1024 AS TempDB_free_log_space_inMB,
total_log_size_in_bytes*1.0/1024/1024 AS TempDB_log_size_inMB
TempDB size is equal to the sum of TempDB data files’ size (TempDB_data_size_inMB) and TempDB log file’s size (TempDB_log_size_inMB). In our example TempDB size is 208 MB (192 MB + 16 MB).
To find the TempDB size in a quicker way, run:
SELECT (SUM(size)*1.0/128) AS TempDB_size_InMB FROM sys.database_files
Go to Object Explorer; expand Databases; expand System Databases; right-click on tempdb database; click on the Properties. This will bring up the following screen where you can see TempDB Size.
What is TempDB max size?
TempDB max size is the limit after which TempDB cannot further grow. TempDB max size has its
Technical limitations: In General Purpose service tier, the TempDB max size is technically limited to 24 GB/vCore (96 - 1,920 GB) and TempDB log file is technically limited to 120 GB. In Business Critical service tier, TempDB competes with other databases for the resources, meaning that the reserved storage is shared between the TempDB and other databases. The max size of the TempDB log file in Business Critical service tier is technically limited to 2TB.
Manually imposed limitations[NEW!]: you can now configure the maximum size of TempDB, you can do it in the same manner as on SQL Server on premises; by changing the max sizes of TempDB files.
NOTE! The TempDB files will grow until they hit the more restricted of the two limitations: the technical and the imposed limitation per file.
The default max size for all TempDB data files on the new SQL Managed Instances is -1 which stands for unlimited. The default max size for TempDB log file is 120 GB on the General Purpose managed instance and 2 TB on the Business Critical managed instances.
Find your TempDB max size
Run the following script to get the max sizes of the TempDB files:
SELECT name, max_size FROM sys.database_files
Go to Object Explorer; expand Databases; expand System Databases; right-click on tempdb database; click on the Properties. This will bring up the following screen where you can see TempDB files’ Max sizes by selecting a page Files.
You can now configure the maximum size of TempDB
You can now configure the maximum size of TempDB in accordance with your workload, in the same manner as on SQL Server on premises; by changing the max sizes of TempDB files.
RECOMMENDATION! We strongly suggest setting the maximum size for all the Temp DB data files to be the same, because the round robin algorithm favors allocations in files with more free space and the system may take a long time to rebalance. Dividing TempDB into data files of equal size provides a high degree of parallel efficiency in operations that use TempDB .
Go to Object Explorer; expand Databases; expand System Databases; right-click on tempdb database; click on the Properties. Select Files page and click on the “…” to edit “Autogrowth / Maxsize”. This will bring up another screen where you can change the Maximum file size for the TempDB file.After changing the TempDB max sizes for every TempDB file, this is our new TempDB files layout:
Summary table of all TempDB configurations
TempDB configurations are persisted after a restart, an instance update management operation or a failover
TempDB is always re-created as an empty database when the instance restarts or fails over and any changes made in TempDB are not preserved in these situations. However, TempDB configuration settings are saved such that the TempDB number of files, their growth increments and their max file sizes stay the same after a restart, an instance update, or a failover.
In this article, we highlighted the differences between TempDB max size, TempDB size and TempDB used space in SQL Managed Instance, and we introduced the new TempDB configuration, a change of the TempDB max size. Thank you for reading and enjoy better performance of your SQL Managed Instance with the customized TempDB. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.26/warc/CC-MAIN-20240227121318-20240227151318-00888.warc.gz | CC-MAIN-2024-10 | 5,743 | 36 |
https://appliedscala.com/blog/2021/readable-scala-style/ | code | The following is a somewhat opinionated style guide targeted at people primarily with Java background who want to benefit from using the mainstream (also known as
Future-based) Scala stack and do so while optimizing their reward to effort ratio. This guide will be well suited for teams that need a reasonable set of simple guidelines to follow, especially at the beginning, until everyone develops good intuition for Scala.
Signal your intention with crystal clear names
When it comes to programming, it's better to be wrong than vague. This implies that you should document your intent in code as clearly as possible. For example, if you have a choice between calling a variable id or documentId and you know that you're dealing with documents here, choose the latter. Generic names such as correlationId are OK in generic code, but using them in business logic is usually a bad idea. Likewise, avoid using data types as names, i.e. it's better to use created rather than date.
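A tiny sketch of the difference (all names below are invented for illustration, not taken from any real API):

```scala
// Vague: "id" of what? "date" names a data type, not a meaning.
def findVague(id: String, date: Long): String = s"$id@$date"

// Clear: the names carry the domain meaning on their own.
def findDocument(documentId: String, createdAt: Long): String =
  s"$documentId@$createdAt"
```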
Minimize the LOC measure by leaning towards longer lines
This is somewhat controversial, but when it comes to reading the code, I much prefer something that doesn't spread multiple screens. Of course, this doesn't mean that you should try to fit everything into a single line. Rather, try to group operations so that they make up a single logical action. For example, if you have a collection of elements and you need to get something from it through a sequence of method calls, feel free to keep them on the same line. If the level of abstraction throughout the method is as consistent as it should be, every line can be viewed as a single finished thought.
Strive for methods that mostly consist of single line val assignments
If a method mostly consists of
val assignments, it forms a very easy to follow structure. Provided that you named your values and methods reasonably well, your Scala code should read like English. Occasional branching is OK, but it shouldn't distract the reader from the main idea, and the main flow should be easily evident.
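As a hypothetical illustration (the Order model and all names are invented), a method built from single-line val assignments reads as a list of named steps:

```scala
case class Order(customer: String, total: Double, year: Int)

// Each val is one finished thought; the flow is evident top to bottom.
def topRecentCustomers(orders: List[Order], sinceYear: Int): List[String] = {
  val recentOrders    = orders.filter(_.year >= sinceYear)
  val totalByCustomer = recentOrders.groupBy(_.customer).view.mapValues(_.map(_.total).sum).toMap
  val ranked          = totalByCustomer.toList.sortBy { case (_, total) => -total }
  ranked.map { case (customer, _) => customer }
}
```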
Keep nesting to a reasonable level
Scala offers a great many tools to avoid excessive nesting. Rather than building a pyramid of flatMaps, we can always write a for-expression instead. Instead of writing multiple nested if-expressions, we can wrap everything in Try and then use a Failure with an application-specific exception to signal an erroneous condition. If we're only interested in the happy path and don't care which operation caused an error, we, yet again, can use an Option.
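A minimal sketch of the Try approach (InvalidOrder and parseQuantity are invented for illustration): a flat pipeline replaces nested if-expressions, and the first failing step short-circuits everything after it.

```scala
import scala.util.{Try, Success, Failure}

// Application-specific exception used as a Failure, instead of nested ifs.
final case class InvalidOrder(message: String) extends Exception(message)

def parseQuantity(raw: String): Try[Int] =
  for {
    n <- Try(raw.trim.toInt) // non-numeric input becomes a Failure here
    _ <- if (n > 0) Success(()) else Failure(InvalidOrder(s"quantity must be positive: $n"))
  } yield n
```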
Do not expose more than necessary
Even though Scala uses
public visibility by default, it's still better to limit it whenever possible, as you would in Java or C#. If a method is only used by one other method, consider moving it inside. If a method or a constant is only used within the class, it must be made
private. If a public utility method is only used in one place, it's usually better to move it to where it's needed and again, make it
private. If a public utility method from a common module is only used within one service, it must be moved out of the common module. Also, check out Neal Ford's Stories Every Developer Should Know to learn about the dangers of inappropriate code reuse.
Do not keep mutable state in service classes
Most service classes are stateless collections of methods grouped together, because they perform related business functions and probably have similar or overlapping dependencies. Because they are stateless, they can safely be instantiated as
@Singletons without any concern about concurrency. This is exactly the opposite of what classical object-oriented programming tells us to do, and this is perfectly fine as we're not doing OOP here.
Avoid putting logic in case classes
Whereas service classes should contain only methods but not state, case classes should contain state but not logic. Again, this is in direct contrast to what OOP tells us to do. Also, keep in mind that if the case class in question resides in the
commons package, everything that was said about keeping exposure to a minimum is still relevant here.
Always destructure tuples
It's easy enough to create a tuple, but it's not always easy to read the code that is full of them. If you're calling
zipWithIndex, for example, it's better to destructure the pair immediately thereby avoiding cryptic calls to
._2. Likewise, never return a tuple from a method, especially
public. If the need arises, use a properly named case class (possibly, defined within a service class to limit its scope).
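For example (a hypothetical helper), the pair produced by zipWithIndex can be destructured right away:

```scala
// "case (line, index)" gives both halves names; no cryptic ._1/._2 access.
def numbered(lines: List[String]): List[String] =
  lines.zipWithIndex.map { case (line, index) => s"${index + 1}: $line" }
```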
Use for expressions for monadic sequencing only
for expressions can be used to express calls to methods such as foreach, so in a way they behave like a mixture of the do notation from Haskell and list comprehensions from Python. In practice, calls to foreach are better left unsugared, especially when collections are involved. On the other hand, sequentializing monadic types like IO is a great way to make code more readable and reduce nesting.
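A small sketch of monadic sequencing over Option (the User model and lookup map are invented): each step may be absent, and the whole expression short-circuits to None on the first missing value.

```scala
final case class User(id: Int, managerId: Option[Int])

val users = Map(1 -> User(1, Some(2)), 2 -> User(2, None))

// Three Option-producing steps sequenced without any nesting.
def managerOf(userId: Int): Option[User] =
  for {
    user    <- users.get(userId)
    mId     <- user.managerId
    manager <- users.get(mId)
  } yield manager
```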
Use val whenever possible and only resort to var if necessary
Unlike Haskell, Scala has the var keyword and allows developers to define mutable variables. However, in an expression-centric language they are almost never needed. For the most part, you should only consider using a method-local var in a very rare situation of performance optimization. Everything else is better done with vals.
Consider replacing while loops with @tailrec functions
Most iterations in Scala can be expressed in terms of library functions. However, sometimes there is a need to exit the iteration early and still return a result. This can be done either imperatively with var and breaks, or with recursion. In many cases, recursion results in more understandable code, so take the time to implement it, but make sure that the function can be annotated with @tailrec.
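As an illustrative sketch, here is a linear search with an early exit, written as a tail-recursive function instead of a while loop with var and breaks:

```scala
import scala.annotation.tailrec

// Returns the index of the first match, or -1; each branch either exits
// with a result or recurses in tail position, so @tailrec compiles to a loop.
@tailrec
def indexWhere[A](items: List[A], p: A => Boolean, from: Int = 0): Int =
  items match {
    case Nil                  => -1
    case head :: _ if p(head) => from
    case _ :: tail            => indexWhere(tail, p, from + 1)
  }
```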
Prefer enumeratum to the standard Enumeration
The standard pattern for implementing
enum in Scala is described on StackOverflow and often used by Scala beginners. This approach is very limited and in fact inferior to the standard Java
enum in almost every way. Always prefer enumeratum as a way of implementing enums in Scala 2.X, and only resort to using
Enumeration to support legacy systems.
Consider using union enums with Either to signal expected errors
Java has the notion of checked exceptions, which is mostly viewed by the Java community as a design mistake. However, this concept allows developers to differentiate between expected and runtime errors. Even though all exceptions in Scala are essentially
RuntimeExceptions, in many situations it is necessary to make expected errors part of the API. If you're not using advanced libraries such as ZIO, the best strategy is to use
Either with a custom error type as
Left value and normal return value as
Right. The custom error type should ideally be an algebraic data type, i.e. should consist of non-intersecting concrete values. Passing
Strings with an arbitrary error message in English as
Left is almost always a bad idea.
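A minimal sketch (the banking domain is invented): expected errors form a small ADT on the Left, so the caller must handle each case explicitly instead of parsing an error message.

```scala
sealed trait TransferError
case object InsufficientFunds extends TransferError
final case class AccountNotFound(accountId: String) extends TransferError

val balances = Map("alice" -> BigDecimal(100))

def withdraw(accountId: String, amount: BigDecimal): Either[TransferError, BigDecimal] =
  balances.get(accountId) match {
    case None                              => Left(AccountNotFound(accountId))
    case Some(balance) if balance < amount => Left(InsufficientFunds)
    case Some(balance)                     => Right(balance - amount)
  }
```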
Use Option only when the absence of value is expected
If properly used,
Option completely eliminates the problem of
NullPointerException in Scala code. Ideally, it should be used to signal to the caller that the value may or may not be present. Since
Option is effectively part of the API, it forces the caller to make sure that both possibilities are covered. For example,
findById methods of a low-level repository might return an
Option to signal that the value might not be present in the database. However, if the entity is expected to exist for some other higher-level operation, its absence should be signalled with a failure rather than None.
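A sketch of the two levels (Doc, DocMissing and the in-memory map are invented): the low-level lookup returns an Option, and the higher-level operation converts absence into a failure.

```scala
import scala.util.{Try, Success, Failure}

final case class Doc(id: String, body: String)
final case class DocMissing(id: String) extends Exception(s"no document $id")

val docs = Map("d1" -> Doc("d1", "hello"))

def findById(id: String): Option[Doc] = docs.get(id)   // absence is expected here

def requireDoc(id: String): Try[Doc] =                 // absence is an error here
  findById(id) match {
    case Some(doc) => Success(doc)
    case None      => Failure(DocMissing(id))
  }
```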
Be mindful of where your Future is running
Most methods on
Future require an
ExecutionContext that specifies in which thread pool this particular piece of code is going to run. Oftentimes, people simply define or import a global
implicit context and forget about the problem altogether. Just as often, this creates a situation where all code, including blocking and long-lasting operations, runs on a single CPU-bound thread pool, depriving other tasks of live threads.
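One common remedy, sketched below with arbitrary pool sizes and invented names, is to pin blocking work to a dedicated thread pool instead of the global context:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// A separate pool for blocking calls, so the CPU-bound pool stays free.
val pool       = Executors.newFixedThreadPool(4)
val blockingEc = ExecutionContext.fromExecutorService(pool)

def slowLookup(key: String): Future[String] =
  Future {
    Thread.sleep(50)        // stands in for a blocking I/O call
    s"value-of-$key"
  }(blockingEc)             // explicitly choose where this runs

val result = Await.result(slowLookup("k"), 2.seconds)
pool.shutdown()
```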
Avoid accidental exception swallowing
Many monadic types such as
Try catch exceptions internally, and most of the time this is exactly what you want. However, it's also very easy to accidentally swallow an exception completely, thereby depriving the caller of ever knowing that an error took place. Always be careful with error propagation when using methods like recover, and be doubly suspicious whenever you encounter a nested Try.
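A contrived example of the difference (parse, parsedOrZero and BadInput are invented): recover can hide a failure entirely, while recoverWith can translate it but keep it visible.

```scala
import scala.util.{Try, Success, Failure}

def parse(raw: String): Try[Int] = Try(raw.toInt)

// Swallows: the caller can no longer tell a real "0" from a parse failure.
def parsedOrZero(raw: String): Try[Int] =
  parse(raw).recover { case _ => 0 }

// Propagates: the error is translated into a domain failure, not hidden.
final case class BadInput(raw: String) extends Exception(raw)
def parsedOrBadInput(raw: String): Try[Int] =
  parse(raw).recoverWith { case _ => Failure(BadInput(raw)) }
```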
Keep inheritance use to an absolute minimum
Using inheritance to avoid code duplication is a terrible idea in Java, and it's no different in Scala. If there's some logic that can be expressed as a pure function and reused by other services from the same module, it's better to extract it as a helper method. Likewise, if the functionality of a certain class needs to be extended, use composition as GoF suggested in "Design Patterns" and Joshua Bloch re-asserted in "Effective Java".
Avoid unnecessary method overloading
Method overloading in Scala works exactly the same way as in Java, but this doesn't mean that it should be used just as often. While Java has to distinguish between primitives and reference types, Scala doesn't have this problem, and overloading is usually a bad idea as it makes code less readable. In general, there are many reasons to avoid overloading in Java and in Scala.
Use implicit parameters sparingly
The implicit keyword is one of the defining features of Scala 2.X, and there are several well-known implicit-specific "design patterns" such as type classes, type tags, extension methods etc. However, when it comes to implicit parameters in the context of standard Scala, there's only one worth considering: Implicit Contexts. This pattern is particularly useful when you need to drag a single value of a specific type (a so-called "context") through a series of method calls, and it may sometimes be thought of as a modern-day version of a ThreadLocal variable. In most other cases, prefer explicit parameters.
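A small sketch of the Implicit Context pattern (RequestContext and the methods are invented): one context value is threaded through the call chain without cluttering every call site.

```scala
final case class RequestContext(requestId: String, userId: String)

def auditLine(action: String)(implicit ctx: RequestContext): String =
  s"[${ctx.requestId}] ${ctx.userId}: $action"

def deleteDocument(docId: String)(implicit ctx: RequestContext): String =
  auditLine(s"delete $docId")   // ctx travels along implicitly

implicit val ctx: RequestContext = RequestContext("req-1", "alice")
val line = deleteDocument("d42")
```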
Use lazy values sparingly
Marking a value as lazy means that the initialization of the value will be postponed until it is first used. One use case for lazy is a configuration parameter that is needed by only a subset of the functions defined by the service class. Another use case is compile-time dependency injection with libraries like MacWire. When it comes to local values,
lazy makes code more difficult to reason about and should be avoided.
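For instance (ReportService and the export.dir key are hypothetical):

```scala
final class ReportService(config: Map[String, String]) {
  // Parsed only if a report is ever exported; construction stays cheap
  // for callers that never touch exporting.
  private lazy val exportDir: String =
    config.getOrElse("export.dir", sys.error("export.dir is not configured"))

  def summarize(xs: List[Int]): Int = xs.sum // never touches exportDir

  def export(report: String): String = s"writing to $exportDir"
}
```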
Use default parameters sparingly
Default parameters allow developers to skip passing them when calling a method. This saves a couple of keystrokes but inevitably makes code more difficult to understand. When a method uses multiple default parameters, it's impossible to tell what will actually happen without looking at the method implementation. Default parameters also make cross-module refactoring much harder and consequently invite insidious and hard-to-catch bugs. In general, there are better ways to make an API safer and more convenient to use, for example, applying the Factory Method design pattern.
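One possible sketch of the factory-method alternative (PoolConfig and its numbers are invented):

```scala
final case class PoolConfig(size: Int, timeoutMs: Int, retries: Int)

object PoolConfig {
  // Named constructors document intent at the call site, where a pile of
  // default arguments would hide what each configuration actually means.
  def forTests: PoolConfig      = PoolConfig(size = 1,  timeoutMs = 100,  retries = 0)
  def forProduction: PoolConfig = PoolConfig(size = 32, timeoutMs = 2000, retries = 3)
}

val cfg = PoolConfig.forTests
```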
Take advantage of the type system
Scala has a great variety of tools for domain modeling, and you should strive to use them often. In particular, you should prefer
case classes to tuples, sum types or similarly modelled enumerations to
String constants. Also, it's usually a good idea to model different states of your domain objects as separate types, as is recommended by the Functional DDD community. Consequently, try not to write methods that take and return values of the same type, as they almost always require the reader to look at the implementation to understand what they do. For more details, check out Scott Wlaschin's talk Domain Modeling Made Functional.
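A small sketch of both ideas (the payment domain here is made up):

```scala
// A sum type instead of String constants...
sealed trait PaymentStatus
case object Pending  extends PaymentStatus
case object Captured extends PaymentStatus
case object Refunded extends PaymentStatus

// ...and separate types for separate states: the compiler now rejects
// an attempt to refund an order that was never paid.
final case class UnpaidOrder(id: Long)
final case class PaidOrder(id: Long, capturedAt: Long)

def capture(order: UnpaidOrder): PaidOrder =
  PaidOrder(order.id, System.currentTimeMillis())

def refund(order: PaidOrder): PaymentStatus = Refunded
```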
Know when to use parentheses in methods
Parameterless methods can be declared with or without parentheses, but the decision is never arbitrary. For accessor-like methods (think "getters" from Java) which are also pure, parentheses shouldn't be used. This rule makes sense because in Scala methods and values share the same scope and can override each other. For pretty much everything else, parentheses should be used.
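For example (a toy TaskQueue, not a real collection):

```scala
final class TaskQueue(initial: List[String]) {
  private var items: List[String] = initial

  // Pure accessor: no parentheses, so it reads like a field.
  def size: Int = items.length

  // Side-effecting operation: parentheses signal that calling it changes state.
  def clear(): Unit = { items = Nil }
}
```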
Use curly braces for non-trivial lambda expressions
If a function takes exactly one parameter, the developer calling it has a choice of using either parentheses or curly braces. This choice, however, is almost never arbitrary. If the parameter is not a function itself, use parentheses. If this parameter is a function, and you have a non-trivial lambda to pass, use curly braces with the body on the next line. If you're passing a method reference or a trivial lambda with underscores, use parentheses and keep everything on the same line.
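A minimal sketch of both cases:

```scala
val xs = List(1, 2, 3, 4)

// Trivial lambda with underscores: parentheses, everything on one line.
val doubled = xs.map(_ * 2)

// Non-trivial lambda: curly braces, body starting on the next line.
val described = xs.map { n =>
  val parity = if (n % 2 == 0) "even" else "odd"
  s"$n is $parity"
}
```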
Avoid infix notation in regular method calls
In Scala, operators such as
:: are defined as regular methods such as
flatMap. This means that it's possible (but not recommended) to use
1.+(2) instead of more familiar
1 + 2. This also means that it's possible (and also not recommended) to use
xs map increment instead of
xs.map(increment). When dealing with regular methods outside of a specific DSL context, always prefer the latter approach and consider using automatic code formatters to enforce this rule.
Do not wrap for expressions in parentheses
The following rule is borrowed from PayPal's style guide. For expressions should not be wrapped in parentheses in order to chain them with
andThen, etc. Instead, extract the result of the for expression into its own variable and perform additional operations on that. This rule also fits nicely into Kent Beck's idea about giving meaningful names to intermediate values.
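A sketch with Future (fetchUser and fetchOrders are stand-ins for real lookups):

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Failure

def fetchUser(id: Long): Future[String] = Future(s"user-$id")
def fetchOrders(user: String): Future[List[String]] = Future(List(s"$user-order"))

// Name the result of the for expression instead of wrapping it in parentheses:
val userOrders: Future[List[String]] = for {
  user   <- fetchUser(7L)
  orders <- fetchOrders(user)
} yield orders

userOrders.andThen { case Failure(e) => println(s"lookup failed: $e") }
```

The intermediate name also tells the reader what the chained for expression actually produces.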
Maximize percentage of pure code
Even though the standard Scala with
Future is intrinsically side-effecting, everyone should strive to limit the amount of side-effecting code and ideally push it to the edge of the program. While the latter might be challenging to do in a typically-procedural layered architecture with repositories as low-level dependencies, one can always try to push side effects to the controller level and make the rest of the code return pure descriptions of what needs to be done.
When in doubt, prioritize readability
Sometimes rules may be contradicting, and it may not be obvious which one should win. When this happens, always consider the overall readability of the code first. If some unique Scala feature makes code easier to understand and change, consider using it. If not, resort to more traditional approaches from classical books like "Code Complete", "Clean Code" and "The Pragmatic Programmer". | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00442.warc.gz | CC-MAIN-2022-27 | 14,585 | 113 |
https://reactos.org/pipermail/ros-dev/2005-October/005371.html | code | [ros-dev] A call for 0.2.8
rob at koepferl.de
Wed Oct 12 00:32:36 CEST 2005
This would still work out with my plans.
I'll be able to do this release. BTW I intended to document and
rbuildify (or the like) the process
>On 10/11/05, Casper Hornstrup <ch at csh-consult.dk> wrote:
>>You can always branch back in time. Subversion is a time machine ;-)
>Yup. I propose 18406 as the branch point. Do we need to hunt down
>RobertK or is the release process documented anywhere?
><arty> don't question it ... it's clearly an optimization
>Ros-dev mailing list
>Ros-dev at reactos.org
More information about the Ros-dev | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256040.41/warc/CC-MAIN-20190520142005-20190520164005-00499.warc.gz | CC-MAIN-2019-22 | 611 | 14 |
https://www.mankier.com/3/TPM_IO_Hash_Start | code | TPM_IO_Hash_Start — indicate the beginging of a TPM TIS hash operation
TPM_IO_Hash_Data — hash the provided data
TPM_IO_Hash_End — indicate the end of a TPM TIS hash operation
TPM library (libtpms, -ltpms)
TPM_RESULT TPM_IO_Hash_Data(const unsigned char *data,
The TPM_IO_Hash_Start() function can be used by an implementation of the TPM TIS hardware interface to indicate the beginning of a hash operation. Following the TPM TIS interface specification it resets several PCRs and terminates existing transport sessions. The TPM_IO_Hash_Data() function is used to send the data to be hashed to the TPM. The TPM_IO_Hash_End() function calculates the final hash and stores it in the locality 4 PCR. The 3 functions must be called in the order they were explained.
The implementation of the above functions handles all TPM-internal actions such as the setting and clearing of permanent flags and PCRs and the calculation of the hash. Any functionality related to the TPM's TIS interface and the handling of flags, locality and state has to be implemented by the caller.
The function completed successfully.
The TPM_IO_Hash_Start() function was called before the TPM received a TPM_Startup command.
The TPM_IO_Hash_Data() or TPM_IO_Hash_End() functions were called before the TPM_IO_Hash_Start() function.
For a complete list of TPM error codes please consult the include file libtpms/tpm_error.h
TPMLIB_MainInit(3), TPMLIB_Terminate(3), TPMLIB_RegisterCallbacks(3), TPMLIB_Process(3)
The man pages TPM_IO_Hash_Data(3) and TPM_IO_Hash_End(3) are aliases of TPM_IO_Hash_Start(3). | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710765.76/warc/CC-MAIN-20221130160457-20221130190457-00722.warc.gz | CC-MAIN-2022-49 | 1,580 | 13 |
https://forum.utorrent.com/topic/21273-always-get-timed-out-even-if-downloads-continue/ | code | itsonlydanny Posted June 13, 2007 Report Share Posted June 13, 2007 Hi there,I am using the latest stable/non-Beta version of uTorrent, and my (default) O/S is Windows Home w/SP 2 (and I live in the UK). For the rest of my hardware & (security) software specs see below. I recently moved - whether wisely or not - from Orange to Tiscali's Max Unlimited Account (ie, "up to" 8 Mbps, or supposedly "between" 6 to 8 Mbps). Initially, I had appalling connection/download speeds when using the 'obligatory' (and notorious, it seems) SpeedTouch ST-330 USB ADSL modem, as described here:http://www.tiscali.co.uk/forums/showthread.php?t=129676However, when they sent me a replacement Ethernet modem - the SpeedTouch 510 v6 - things definitely picked up (as also mentioned in the above forum post). Indeed, for a brief, glorious, minute or two I achieved the near dazzling speed of 7.54 Mbps - though the non-peak time (6-11pm apparently) 'average' is about 4.4 Mbps, while the peak time 'average' is a risible 0.2 Mbps (and seems to begin around 4pm rather than 6pm).According to uTorrent's SpeedGuide (just taken) my download speed is a mighty 2088 kb/s and my upload speed is 185 kb/s. As for 'thinkBroadband', they've just told me that my downloads are 4.4 Mbps and my uploads are 0.4 Mbps. 
As for the TweakTest on 'dslreports', here are the results: http://www.dslreports.com/tweakr/block:4698b76?service=dsl&speed=6000&os=winXP&via=winXPpppoeAnd here is my report from ProcessExplorer:"Process PID CPU Description Company NameSystem Idle Process 0 98.44 Interrupts n/a Hardware Interrupts DPCs n/a Deferred Procedure Calls System 4 smss.exe 1352 Windows NT Session Manager Microsoft Corporation csrss.exe 1500 Client Server Runtime Process Microsoft Corporation winlogon.exe 1524 Windows NT Logon Application Microsoft Corporation services.exe 1568 Services and Controller app Microsoft Corporation svchost.exe 1748 Generic Host Process for Win32 Services Microsoft Corporation svchost.exe 1808 Generic Host Process for Win32 Services Microsoft Corporation svchost.exe 1972 Generic Host Process for Win32 Services Microsoft Corporation wscntfy.exe 3608 Windows Security Center Notification App Microsoft Corporation svchost.exe 2036 Generic Host Process for Win32 Services Microsoft Corporation svchost.exe 424 Generic Host Process for Win32 Services Microsoft Corporation SABSVC.EXE 716 Super Ad Blocker Service SuperAdBlocker.com spoolsv.exe 816 Spooler SubSystem App Microsoft Corporation eEBSvc.exe 932 schedul2.exe 1140 Acronis Scheduler 2 Acronis avp.exe 1156 Kaspersky Anti-Virus Kaspersky Lab DkService.exe 1184 DKSERVICE.EXE Diskeeper Corporation eEBAgent.exe 1212 eEBAPI Status Agent SEIKO EPSON CORPORATION SAgent2.exe 1236 EPSON Printer Status Agent SEIKO EPSON CORPORATION E_S00RP2.EXE 1252 EPSON Status Monitor 3 SEIKO EPSON CORPORATION nTuneService.exe 1328 NVIDIA Access Manager NVIDIA nvsvc32.exe 1368 NVIDIA Driver Helper Service, Version 93.71 NVIDIA Corporation svchost.exe 1960 1.56 Generic Host Process for Win32 Services Microsoft Corporation wwSecure.exe 432 Washer Security Service Webroot Software, Inc. 
alg.exe 2476 Application Layer Gateway Service Microsoft Corporation lsass.exe 1580 LSA Shell (Export Version) Microsoft Corporationexplorer.exe 3540 Windows Explorer Microsoft Corporation csrss.exe 3676 jusched.exe 3784 Java Platform SE binary Sun Microsystems, Inc. rundll32.exe 3796 Run a DLL as an App Microsoft Corporation TrueImageMonitor.exe 3900 Acronis True Image Monitor Acronis TimounterMonitor.exe 3936 Monitor for Acronis True Image Backup Archive Explorer Acronis schedhlp.exe 3952 Acronis Scheduler Helper Acronis Panel.exe 3964 MouseDrv.exe 4032 5 Key Mouse Driver PS2USBKbdDrv.exe 4084 E_S10IC2.EXE 620 EPSON Status Monitor 3 SEIKO EPSON CORPORATION TMTray.exe 1944 TweakMASTER PRO Agent Hagel Technologies Ltd avp.exe 2028 Kaspersky Anti-Virus Kaspersky Lab ctfmon.exe 2172 CTF Loader Microsoft Corporation msmsgs.exe 2268 Windows Messenger Microsoft Corporation sgmain.exe 2536 SpywareGuard sgbhp.exe 2816 SG Browser Hijacking Protection procexp.exe 3236 Sysinternals Process Explorer Sysinternalsfirefox.exe 2960 Firefox Mozilla Corporation"My problem - after reading your Guides, FAQa, etc - is that when I'm downloading there always come a point when when all the torrents have the red mark next to them - that is, the message "Tracker Offline (timed out)". This does not mean I cannot download anything - it still comes down, sometimes at a relatively decent rate considering how slow my connection can be at times. But, presumably, as I'm always being "timed out", then I should be able to gain greater speeds .... yes?Of course, it could be that Tiscali is throttling my torrent connections - in which case, I suppose there is not much I can do about it (after all, I never had this "timed out", etc, problem when with Orange). But then again, it could be that my settings/configuration is pretty lousy. I've always got the green tick 'Network OK' symbol, and I followed all the instructions, etc, at 'portforwarding.com'. 
I'm almost certain that I've configured my Kaspersky Internet Security correctly, so uTorrent has full access, and so on. Perhaps I'm missing something? And, yes, I've Enabled Encryption. If someone's got any advice or suggestions for me, that would be great - as it gets a little bit dispiriting looking at a row of 'angry' red icons all day! (so to speak) all the best, itsonlydanny
This topic is now archived and is closed to further replies. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00097.warc.gz | CC-MAIN-2023-14 | 5,576 | 2 |
https://devrant.com/rants/1214023/what-actual-pain-is-trying-to-install-cuda-on-windows-that-should-be-a-pain-ther | code | Do all the things like ++ or -- rants, post your own rants, comment on others' rants and build your customized dev avatarSign Up
-ANGRY-CLIENT-11282182dSame here. I will probably buy a computer or build one myself
-ANGRY-CLIENT-11282182dI've tried to do machine learning e.g. on my laptop.
It just takes way too much resources.
magicMirror4923182dwhy would you do that to yourself?
spin up an amazon machine when you need those.
ilikeglue1659182dI really wish to know why tensor flow doesn't work with opencl on amd cards...
Alice100325182dWhen you want to install something and waste time without knowing what your machine supports instead of looking for it right away.
BigETI50182dI don't believe a long lasting future in proprietary technology like CUDA. Open software designed for multi-purpose computation like OpenCL is the way to go. Also it can run code either on CPU, GPU or even on FPGA.
paranoidAndroid1012182dUse DA CLOUD. Google even offers tensor machines. That's their business model fir tensor flow.
https://forum.bubble.io/t/how-to-use-javascript-to-push-items-to-a-list-of-texts/269352 | code | I am building a plugin where I have an action that takes in an array of text and chunks it depending on some number which I’m also passing. Basically, the code works perfectly as I’ve tested in code editors.
What I am passing back as a return value is:
The return value is of type list of texts; however, when I receive this in the workflow and map it to a state or to the database to test the value, the entire thing is stored as the first item.
Does anybody knows how to resolve this?
Looking forward to any advice possible. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100650.21/warc/CC-MAIN-20231207054219-20231207084219-00191.warc.gz | CC-MAIN-2023-50 | 534 | 5 |
https://www.pokecommunity.com/showthread.php?s=5034abc95719a4ae537a5293849fd50e&p=10118047 | code | Why would i? There is no chance ms would dare to force its costumers hands. Besides it's virtually impossible for them. They were trying with XP for years - even by "burying it".
Why would i?
Well, I can think of 2 reasons.
1) Security patches. If you don't go online, this isn't too much of a problem. Still, there won't be many security patches in the future (unless it's something incredibly major and even then, it's not guaranteed).
2) The inevitable future. Unless you plan on using the same hardware (which I can't say I'd be surprised you would), you'll eventually have to upgrade. Windows 7 is becoming more and more incompatible with newer hardware. Even if you do stick with older machines in the future, software is going to outpace it eventually.
There is no chance ms would dare to force its costumers hands.
You apparently missed many of the forced upgrades from 7/8/8.1 to 10 epidemic that happened recently.
Without PS/2 mouse/keyboard support, it's impossible to get through the installer for Windows 7 on modern hardware.
Microsoft absolutely would force their customers' hands. A previous customer that isn't buying new stuff isn't making them any money.
Well, i might've missed certain things thanks to some not-so-legal modifications to my perfectly legal system. There was this spam about upgrading to the next level but disabling it wasn't much of an issue. Problem is, after upgrading system (hateful "dayz" devs recommended that to me) it was too damn annoying. It needs mods at every possible level - even logonui. Who knows, maybe next "Windows" will use booting process more efficiently and display a block of commercials next to their logo.
They've tried, but until now there was always a way to go around it. Besides, how can they change something beyond their control? XP was undead mostly because it was beyond western flank. Mostly in China. I know, i know - it is a story that is long overdue...
Changes are inevitable but nobody force me to reinstall my os in the nearest future. I won't fight again with this damn screen and its color calibration - hell no.
Windows 2000 is, imho, the best version of Windows to date. I don't dislike 10 anymore than any other version of Windows though. XP and 7 tend to be my least favorites though simply because many users overglorify them. 8/8.1/10 have their own plethora of issues as well. Vista, while not great, isn't as bad as it was made out to be imo.
Most people feel that way, so nothing to be ashamed of honestly. What exactly was uncomfortable though if you don't mind my asking?
Most of the time I use my laptop for work. I've been working for a while with Windows 7/8/10 at my previous job and all my work goes through CRM. Otherwise I use the laptop only for online communication through CRM, and that's quite unusual for me after a few years of using another OS. But a lot of other employees use Linux and they like it.
http://www.interlopers.net/forum/viewtopic.php?f=4&t=3153&start=12255 | code | It is currently Thu Jun 20, 2013 8:44 am
Armageddon wrote:Needs more blend texture on the ground.
I thought the spec map always needed to be greyscale
Habboi wrote:I thought the spec map always needed to be greyscale
Not at all. Some games support colour specs. You'll find blue specs are good for metallic and orange for warmer wooden surfaces.
As for doing it in UDK, you could force it through a vector node and plugging it into a multiply with the spec.
https://www.pagalguy.com/discussions/ims-simcat-scores-and-discussion-for-the-entire-2013-14-seas-25101371 | code | Can anyone please share previous year simcats
Please ask all your queries related to IMS SIMCAT for the 2014-15 season here -
Can anyone plz tel me where is ims branch in chandigarh..I googled n went to d place just to find a lock at that showroom.I think they have shifted..anyone plz who knows where is it?
I know that everybody is scoring above 100 and that it was a very easy paper. But can anyone please analyse the difficulty level w.r.t to actual CAT.
Any PDFs available for IMS mocks?
If yes, please can you share with me not necessarily of this year but of previous years too.
Thanks in anticiption :) 😃
where can we see the result of simcat 4?i gave the test but i dnt knw my percentile.
can somebody share the simcat pdfs of the last 2-3 years . If you can share this year pdfs also , it would even be better | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891228.40/warc/CC-MAIN-20201026115814-20201026145814-00354.warc.gz | CC-MAIN-2020-45 | 817 | 9 |
https://mcpmag.com/Articles/2005/03/16/Indigo-Avalon-Community-Technology-Previews-Released.aspx | code | Indigo, Avalon Community Technology Previews Released
- By Scott Bekker
Microsoft on Wednesday released its first Community Technology Preview (CTP) of "Indigo," the code-name for the company's new communications subsystem for Windows, and a second CTP of "Avalon," a new Windows presentation subsystem.
Indigo and Avalon were originally slated to be two of the four pillars of the Windows "Longhorn" operating system release. In August Microsoft decided to make both available for Windows XP and Windows Server 2003 as well.
Also on Wednesday, Microsoft released a new version of the .NET Framework and an SDK for WinFX, a next-generation programming model for Windows. The releases are available to MSDN subscribers.
The Indigo CTP is not the technology's debut; it was included in the first technical preview of the Longhorn operating system, called "PDC Longhorn," at the Professional Developers Conference in October 2003.
The CTP is the first publicly distributed update to Indigo since the PDC more than a year ago. Although Microsoft released another technical preview of Longhorn at its May 2004 WinHEC show, that version included updates to Avalon but did not change Indigo. Avalon received yet another public refresh in November with the release of the first Avalon CTP.
Microsoft defines Indigo as a new breed of communications infrastructure built around the Web services architecture. "Advanced Web services support in Indigo provides secure, reliable, and transacted messaging along with interoperability. Indigo's service-oriented programming model is built on the .NET Framework and simplifies development of connected systems," according to a company FAQ on Longhorn.
Scott Bekker is editor in chief of Redmond Channel Partner magazine. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817474.31/warc/CC-MAIN-20240420025340-20240420055340-00776.warc.gz | CC-MAIN-2024-18 | 1,754 | 9 |
http://www.sevenforums.com/866901-post7.html | code | I've had this issue. I may have to try disabling/uninstalling RAID drivers, but I'd rather not as I need the read.
Currently the issue seems to be caused by a device/driver producing high DPC latency (see DPC Latency Checker).
Disabling the USB 3.0 controller in the UD3R BIOS seems to have fixed it for me. The issue has always begun with the Windows logo, but I've been running for a couple of hours now without it occurring.
P.S., I haven't had the issue of crashing when playing video with AC3 audio. I don't have any sources with DTS audio streams.
I've had similar audio popping/static when using both a Creative USB headset and a Creative X-Fi discrete sound card. I am not certain, however, that these were the same issue. | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00070-ip-10-171-10-70.ec2.internal.warc.gz | CC-MAIN-2017-04 | 720 | 5 |
https://www.infovision.com/services/intelligent-automation-and-ai | code | Automation has revolutionized businesses with productivity boosts and efficiency gains. Intelligent automation is capable of more.
Many organizations have plugged in RPA (robotic process automation) to exploit pockets of opportunities to drive operational efficiency. Building on this by adding a layer of intelligence and scaling the automation enterprise-wide, we equip the organization to evolve with the operational complexity. AI, with its self-learning capabilities, enables automated platforms to optimize and correct course as the ecosystem around them gets disrupted.
Organizations employing AI-driven automation and data-driven decisions have seen multi-fold impact and have powered through disruptions with effortless digital transformations. We outfit organizations with this ability to achieve man-machine harmonization, supporting them through organizational change and empowering them to predict and overcome the challenges of the evolving normal. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00088.warc.gz | CC-MAIN-2024-10 | 962 | 3 |
https://refarabulimehapi.alphabetnyc.com/copy-in-linux-with-overwrite-a-file-45502yk.html | code | Operating modes[ edit ] cp has three principal modes of operation.
Both instructions follow the same basic form and accomplish pretty much the same thing. So if both instructions are equivalent, why do they both exist, and which one should you use? Read on to find out.
In the Beginning
Unlike the COPY instruction, ADD was part of Docker from the beginning and supports a few additional tricks beyond simply copying files from the build context.
Another form allows you to simply specify the destination directory for the downloaded file. Another feature of ADD is the ability to automatically unpack compressed files.
Here's a quote from an issue that was logged against the ADD command back in December of It can add local and remote files. It will sometimes untar a file and it will sometimes not untar a file. If a file is a tarball that you want to copy, you accidentally untar it. If the file is a tarball in some unrecognized compressed format that you want to untar, you accidentally copy it.
Obviously, no one wanted to break backward compatibility with existing usage of ADD, so it was decided that a new instruction would be added which behaved more predictably. Anything that you want to COPY into the container must be present in the local build context.
Also, COPY doesn't give any special treatment to archives. If you COPY an archive file it will land in the container exactly as it appears in the build context without any attempt to unpack it.
COPY is really just a stripped-down version of ADD that aims to meet the majority of the "copy-files-to-container" use cases without any surprises. In case it isn't obvious by now, the recommendation from the Docker team is to use COPY in almost all cases.
Really, the only reason to use ADD is when you have an archive file that you definitely want to have auto-extracted into the image. Technically, yes, but in most cases you're probably better off RUNning a curl or wget.
Tutorial on using cp, a Unix and Linux command for copying files and directories. If a copy operation will overwrite a file, the -b flag may be used to create a backup of the file. This copies the file into place and writes a backup file. The Linux cp command provides you the power to copy files and directories through the command line. In this tutorial, we will discuss the basic usage of this tool using easy-to-understand examples.
Copy a file and overwrite the existing file: mv would not overwrite the existing file, but replace it. – Gilles
cp is a Linux shell command to copy files and directories. Interactive prompt before file overwrite:
$ cp -i test.c bak
cp: overwrite 'bak/test.c'? y
Using an option, such as -i, can make the process a little more useful, because if you want to copy a file to a location that already has a file with the same name, you'll be asked first if you really want to overwrite -- meaning replace -- the file that's already there.
shred – overwrite a file to hide its contents
-n – specifies number of times to overwrite file content (the default is 3)
https://azure.microsoft.com/hu-hu/blog/windows-azure-community-news-roundup-edition-7/ | code | Welcome to the latest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure. Here are the highlights from last week.
Articles and Blog Posts
- Developing a Hello World Java Application and Deploying it in Windows Azure – Part 1 by JananiK, DotNetSlackers.com (posted Feb. 17.)
- Tutorial – Going Live with Your First Application on Windows Azure – Part 1 Cloud.Story.in (posted Feb. 17.)
- Combining Multiple Windows Azure Worker Roles into a Windows Azure Web Role blog post by Dina Berry (posted Feb. 16.)
- New Course: Building Windows Phone Applications with Windows Azure by Jason Salmond, the pluralsight blog (posted Feb. 15.)
- Think Big and Microsoft BizSpark Offer Startups $60,000 in Cloud Services on Windows Azure by Allison Way, Think Big blog (posted Feb. 15.)
- Deploying a Windows Azure & SQL Azure Solution with Visual Studio Team Build blog post by Michael Collier (posted Feb. 14.)
- (Windows) Azure Revolution blog post by Juha Harkonen (posted Feb. 14.)
Upcoming Events, User Group Meetings
- Feb. 22: Umbraco 5 and Windows Azure Workshop – Ghent, Belgium
- Feb. 23: NY Windows Azure Summit – New York City
- March 6: UK Windows Azure User Group Meeting – London
- March 8: Windows Azure: Focus on the Application, Not Your Infrastructure (live event)– Waltham, MA
- March 24: Detroit Day of Windows Azure – Detroit, MI
- Ongoing: Cloud Computing Soup to Nuts - Online
Recent Windows Azure Forums Discussion Threads
- Creating a secure session key – 243 views, 6 replies
- Reg Data-tier Applications Node in SQL Management Studio R2 – 205 views, 8 replies
- Blob URL address case sensitive – 332 views, 4 replies
- Store an array as an element in (Windows) Azure Table – 539 views, 5 replies
- Websockets vis Node.js/socket.io – 701 views, 7 replies
Please feel free to send us any articles you come across that you think we should highlight, or content of your own that you’d like to share. And let us know about any local events, groups or activities that you think we should tell the rest of the Windows Azure community about. You can use the comments section below, or talk to us on Twitter @WindowsAzure.
I look forward to hearing from you! | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00011.warc.gz | CC-MAIN-2023-14 | 2,274 | 24 |
https://setht.com/graph-data-structure/ | code | Table of Contents
In the rapidly evolving landscape of data analysis, graph data structures have emerged as a powerful tool for navigating the intricacies of interconnected data sets. This article delves into the multifaceted aspects of graph data structures, from the comparison of graph queries with traditional SQL queries to the challenges of implementing graph databases, and the analytical advantages offered by graph algorithms. It also introduces PuppyGraph, a solution that bridges the gap between SQL and graph analytics, and looks ahead to the future of data analysis with graph structures.
- Graph query languages excel in navigating complex relationships, providing a simpler syntax than SQL for exploring data interconnections.
- Implementing and scaling graph databases pose technical challenges, but tools like PuppyGraph facilitate the integration of graph analytics into SQL environments.
- Graph algorithms, such as clustering and pathfinding, offer deeper insights by accounting for the strength and patterns of connections within data.
- PuppyGraph represents a significant advancement for companies looking to leverage graph capabilities without the complexity of traditional graph databases.
- The future of data analysis is embracing graph structures, which offer a new paradigm for understanding the complex interactions and dependencies within data.
Graph Queries vs SQL Queries
Ease of Navigating Complex Relationships
Graph databases are inherently designed to denote the connections between entities, offering a natural and intuitive way to represent complex relationships. Unlike SQL queries, which may require multiple complex joins to depict intricate relationships, graph query languages provide a syntax that simplifies the exploration of these connections.
For example, in recommendation systems, graph queries can quickly identify links between users, products, and interests, enabling nuanced recommendations. This is a task where SQL, with its reliance on joins and subqueries, may struggle with the complexity and scale.
Graph databases excel in environments where relationships are the focal point, often revealing insights that are difficult to obtain through standard SQL queries.
By mapping out complex relational dynamics, graph databases become a powerful tool for developers dealing with elaborate hierarchies and interlinked datasets, driving better decision-making and fostering innovation.
The Shortest Path Problem: A Case Study
The quest for the most efficient route between two points is a classic problem in graph theory, often referred to as the shortest path problem. This challenge is not just academic; it has practical applications in areas such as network design, transportation, and logistics.
In the context of graph databases, the shortest path problem is addressed using specialized algorithms like Dijkstra’s or the Bellman-Ford algorithm. These algorithms are designed to calculate the minimum distance between nodes in a weighted graph, where each edge has an associated cost or distance.
The power of graph databases becomes evident when dealing with complex networks where traditional SQL queries would require multiple joins and subqueries, leading to a significant increase in complexity and execution time.
Here’s a comparison of two popular algorithms:
| Algorithm | Time Complexity | Best Suited For |
| --- | --- | --- |
| Dijkstra's | O(V log V + E) | Weighted graphs without negative edges |
| Bellman-Ford | O(V · E) | Graphs with negative edge weights |
*V represents the number of vertices, and E represents the number of edges in the graph.
Understanding the nuances of these algorithms is crucial for developers and data scientists alike, especially when the efficiency of data retrieval is paramount. The choice of algorithm can greatly affect the performance of graph queries, particularly in large and complex datasets.
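To ground the comparison, here is a minimal pure-Python sketch of Dijkstra's algorithm using a binary heap. The road network at the bottom is an invented example, not data from the article:

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` in a weighted graph with
    non-negative edge weights, using a binary heap: O((V + E) log V)."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip it
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical road network: node -> list of (neighbor, distance) pairs.
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(roads, "A"))   # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```

Swapping in Bellman-Ford would mean relaxing every edge V−1 times instead, which is what makes it tolerant of negative weights at the cost of a higher time complexity.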
Overcoming the Limitations of Traditional Joins
Graph data structures offer a robust alternative to traditional joins, enabling more intuitive modeling of complex relationships. Graph queries simplify traversals across vast networks, where relational databases would require cumbersome and performance-intensive joins.
- Ease of Use: Graph databases use nodes and edges, making them inherently suitable for interconnected data.
- Performance: Efficient graph algorithms can handle deep joins more effectively than relational databases.
- Flexibility: Adding new relationships or nodes to a graph database does not necessitate a schema overhaul.
The integration of graph-based analysis complements existing methodologies, fostering a comprehensive exploration of data’s structure and dynamics.
The transition to graph databases, however, is not without its challenges. It demands a shift in thinking from tabular to relational perspectives, and the need for specialized tooling and expertise can be significant barriers. Despite these hurdles, the benefits of graph databases in navigating complex data landscapes are increasingly recognized, paving the way for more widespread adoption.
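To make the "deep join" point concrete, the sketch below walks a hypothetical follows-graph out to n hops with a plain breadth-first search, a query that in SQL would need one self-join per hop. The graph and names are made up for illustration:

```python
from collections import deque

def within_n_hops(adjacency, start, n):
    """Nodes reachable from `start` in at most n hops (breadth-first)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == n:
            continue                      # don't expand past the hop limit
        for neighbor in adjacency.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen - {start}

# Hypothetical "follows" graph.
follows = {
    "ana": ["bob", "cat"],
    "bob": ["dan"],
    "cat": ["dan", "eve"],
    "dan": ["fay"],
}
print(sorted(within_n_hops(follows, "ana", 2)))   # ['bob', 'cat', 'dan', 'eve']
```

Adding a new node or relationship here is just another dictionary entry, which is the flexibility point above: no schema change is needed.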
Challenges of Implementing and Running Graph Databases
Technical Hurdles in Adoption
The transition to graph databases signifies a paradigm shift from traditional relational models to structures that prioritize relationships. The necessity to rethink data architecture can be a significant barrier to adoption, as it requires not only new knowledge but also a departure from the comfort of SQL queries.
Graph databases necessitate specialized tools for their unique operations, which may not align with existing SQL infrastructure. This often results in:
- Additional investments in new tools
- The need for extensive training
- Integration complexities
The dense interconnectivity of graph databases introduces scaling challenges that are not present in SQL databases. As the network of nodes and edges grows, so does the difficulty in maintaining performance, often necessitating advanced strategies or model adjustments.
Ultimately, these technical hurdles can slow down the adoption process, as organizations must weigh the costs and benefits of integrating graph technology into their existing systems.
Complex ETL Processes for Data Integration
Transitioning from traditional SQL databases to graph databases introduces a significant challenge: the ETL (Extract, Transform, Load) process. This process is not only resource-intensive but also demands a high level of expertise to manage effectively. The transformation of relational data into graph-compatible formats—nodes, edges, and properties—requires meticulous planning and execution.
The integration of graph databases necessitates specialized tooling and a considerable investment in time and resources. As data evolves, maintaining a responsive and up-to-date graph database becomes an ongoing effort.
Moreover, the adoption of graph databases often means investing in new tools and training to bridge the gap with existing SQL infrastructure. This can lead to a complex journey of integration, where the benefits of graph analytics must be weighed against the costs of tooling and expertise requirements.
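As a toy illustration of what such an ETL step does, the snippet below flattens two hypothetical relational tables into the nodes-and-edges shape a graph database expects. The table names, columns, and ID conventions are all invented:

```python
# Relational rows, as they might come out of "customers" and "orders" tables.
customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
orders = [{"id": 10, "customer_id": 1}, {"id": 11, "customer_id": 2},
          {"id": 12, "customer_id": 1}]

# Graph-compatible form: nodes with labels/properties, edges with a type.
nodes = ([{"id": f"c{c['id']}", "label": "Customer", "name": c["name"]}
          for c in customers]
         + [{"id": f"o{o['id']}", "label": "Order"} for o in orders])
edges = [{"src": f"c{o['customer_id']}", "dst": f"o{o['id']}", "type": "PLACED"}
         for o in orders]

print(len(nodes), len(edges))   # 5 3
```

The maintenance burden described above comes from keeping this mapping in sync as the relational schema and data evolve.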
Scaling Graph Databases for Large Datasets
Scaling graph databases to accommodate large datasets is a multifaceted challenge. The complexity of graph data, with its intricate web of nodes and edges, poses unique scaling difficulties that are not as prevalent in SQL or NoSQL databases. These challenges are often due to the dense interconnectivity of graph structures, where adding more hardware does not necessarily translate to better performance. Instead, it may require a reevaluation of the graph model or the adoption of more sophisticated scaling strategies.
The key to effective scaling lies in understanding the specific demands of graph data and the limitations of existing infrastructure.
Graph databases are unparalleled in their ability to map out complex relational dynamics, making them invaluable for navigating elaborate networks within data. However, the ETL (Extract, Transform, Load) processes and ongoing maintenance demands can be significant, especially as the size of the data grows. To address these issues, developers and data architects must consider both the technical and strategic aspects of scaling.
Alternatives such as graph query engines provide a pathway to leverage graph analytics without the extensive scaling hurdles. Tools like PuppyGraph enable the integration of graph analytics within SQL environments, offering a bridge between the structured world of SQL and the interconnected realm of graph databases.
The Analytical Advantages of Graph Algorithms
Clustering Algorithms and Community Detection
Graph algorithms offer powerful mechanisms for exploring the structure of data. Clustering algorithms in graph theory can identify communities or groups of nodes that are more densely connected to each other than to the rest of the network. This is akin to traditional clustering but enriched by the ability to account for the strength and pattern of connections between data points.
By incorporating graph theory into our analysis of distance metrics, we do more than enhance our ability to cluster and classify; we open the door to a more dynamic and interconnected view of data. This approach allows us to see beyond isolated clusters and individual paths, offering a holistic view of the data ecosystem.
By exposing the underlying structure of networks, including their patterns and anomalies, community detection algorithms can help you improve the efficiency and effectiveness of your systems and processes.
Applying these concepts to data correlations, we begin to see beyond simple pairwise relationships. We can identify clusters of variables that share strong connections, pinpoint variables that act as central hubs in the network, and trace the paths through which influence or information flows across the entire system.
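A minimal structural grouping of this kind treats each connected component of the network as a candidate community. Real community-detection algorithms (e.g., Louvain) go further by weighing connection density, but the core idea can be sketched in pure Python; the friendship graph below is invented:

```python
def connected_components(adjacency):
    """Partition an undirected graph into its connected components."""
    seen, components = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:                      # depth-first flood fill
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency.get(node, []))
        seen |= component
        components.append(sorted(component))
    return components

# Two densely connected friend groups with no links between them.
friends = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
    "x": ["y"], "y": ["x", "z"], "z": ["y"],
}
print(connected_components(friends))   # [['a', 'b', 'c'], ['x', 'y', 'z']]
```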
Pathfinding Algorithms and Efficient Routing
Pathfinding algorithms are essential for analyzing and optimizing routes within a graph. They enable the discovery of the shortest or most efficient paths between nodes, which is crucial in various applications such as logistics, network design, and social network analysis. These algorithms, like Dijkstra’s and A* search, efficiently handle the traversal of complex networks, providing insights that are not readily apparent in non-graph data structures.
The efficiency of pathfinding algorithms is not just theoretical; it has practical implications in real-world scenarios where time and resource optimization are critical.
For example, consider the following table summarizing the performance of different pathfinding algorithms in a hypothetical network analysis:
| Algorithm | Average Time Complexity | Graph Type |
| --- | --- | --- |
| Dijkstra's | O((V + E) log V) | Weighted, without negative edges |
| A* Search | Heuristic-dependent | Weighted, with heuristics |
| Bellman-Ford | O(V · E) | Weighted, with negative edges |
V represents the number of vertices, and E represents the number of edges in the graph.
The choice of algorithm depends on the specific characteristics of the graph and the nature of the problem being solved. While Dijkstra’s algorithm is well-suited for graphs without negative edge weights, Bellman-Ford can handle graphs with such weights, albeit with higher time complexity.
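For completeness, here is a minimal Bellman-Ford sketch. It is slower than Dijkstra's algorithm but handles the negative edge weights discussed above; the edge list is an invented example:

```python
def bellman_ford(vertices, edges, start):
    """Shortest distances from `start`; supports negative edge weights.
    Raises ValueError if a negative cycle is reachable."""
    dist = {v: float("inf") for v in vertices}
    dist[start] = 0
    for _ in range(len(vertices) - 1):     # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one extra pass detects cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle detected")
    return dist

edges = [("A", "B", 4), ("A", "C", 2), ("C", "B", -1), ("B", "D", 3)]
print(bellman_ford("ABCD", edges, "A"))   # {'A': 0, 'B': 1, 'C': 2, 'D': 4}
```

Note how the C→B edge with weight −1 pulls B's distance below what Dijkstra's greedy expansion could safely assume.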
Comparing Graph and Traditional Clustering Techniques
Graph algorithms offer powerful mechanisms for exploring the structure of data, particularly through clustering. Clustering in graph theory identifies communities by examining the density of connections, a nuanced approach that traditional methods may not capture. This graph-centric perspective allows for a dynamic understanding of data relationships, transcending isolated clusters to reveal a comprehensive data ecosystem.
In contrast, traditional clustering techniques often rely on distance metrics that may overlook the intricate web of relationships between data points. By integrating graph learning, such as Product Space Clustering, analysts can leverage the interconnected nature of data, offering insights into how groups of similar elements, like products, are related.
The adoption of graph-based analysis signifies a paradigm shift in our comprehension of data structures, enabling a more holistic and interconnected view of datasets.
Where traditional clustering yields static, isolated clusters built from individual data points, graph-based clustering surfaces dynamic, interconnected communities defined by the relationships between those points.
Embracing graph theory not only enhances visualization but also revolutionizes our approach to data analysis. It paves the way for innovative machine learning applications and combinatorial optimization, fundamentally altering our approach to data relationships and analysis.
PuppyGraph: Bridging the Gap Between SQL and Graph Analytics
Integrating Graph Analytics into SQL Environments
The advent of tools like PuppyGraph marks a pivotal moment for organizations that aim to harness the analytical depth of graph queries within their existing SQL frameworks. This integration signifies a major stride in making graph analytics accessible, especially for businesses that previously considered graph capabilities too intricate or beyond their technical reach. By bridging the structured nature of SQL with the dynamic interconnectivity of graphs, PuppyGraph simplifies the transition from traditional to graph-enhanced data analysis.
Traditionally, SQL databases required a complex ETL process to enable graph querying. This involved transforming relational data into graph formats, such as nodes and edges, which was both time-consuming and technically demanding. With the introduction of graph analytics tools that integrate directly into SQL environments, these hurdles are significantly reduced. Developers can now perform graph operations without the need for separate graph databases or the development of complex ETL pipelines.
The integration of graph analytics into SQL environments is not just a technical upgrade but a paradigm shift in data analysis, offering a new lens through which to view and interact with data.
The key benefits of integrating graph analytics into SQL environments include:

- Direct use of graph queries within SQL.
- No need for separate graph databases.
- Ability to uncover complex relationships.
- Developers can leverage existing SQL skills.
The Role of Graph Query Engines
Graph query engines have emerged as a pivotal technology in the realm of data analytics, offering a seamless way to execute graph queries within traditional SQL environments. PuppyGraph’s graph query engine enables organizations to swiftly navigate intricate data networks, facilitating transformative real-time decision-making and analytics.
Graph query languages, distinct from SQL, are designed to handle complex, interconnected data with ease. They eliminate the need for cumbersome joins and subqueries, which can become unwieldy in SQL when dealing with highly relational data. For example, graph queries can effortlessly identify relationships in a social network or power recommendation engines by linking users, products, and interests.
By providing a direct method to execute graph queries on data stored in SQL warehouses, PuppyGraph serves as a crucial bridge, treating tabular data as graph structures. This approach negates the need for complex ETL processes, streamlining the integration of graph analytics into existing data systems.
Case Studies: Real-World Applications of PuppyGraph
PuppyGraph’s integration into SQL environments has been transformative for many organizations, enabling them to harness the power of graph analytics without the need to overhaul their existing data infrastructure. The seamless transition from traditional SQL queries to graph queries has opened up new avenues for data analysis and insight generation.
One notable case study involves a social media platform that utilized PuppyGraph to analyze complex networks of user interactions. By leveraging PuppyGraph’s capabilities, the platform was able to:
- Identify influential users within the network
- Understand the spread of information and how it permeates through the user base
- Detect communities and sub-communities based on user interactions and shared interests
The ability to perform these analyses without extensive ETL processes or a separate graph database has made PuppyGraph an invaluable tool for data-driven decision-making.
Another case study highlights a logistics company that integrated PuppyGraph to optimize their routing systems. The company benefited from the pathfinding algorithms provided by PuppyGraph, resulting in:
- Reduced transportation costs
- Improved delivery times
- Enhanced overall operational efficiency
These real-world applications demonstrate PuppyGraph’s versatility and the practical benefits of integrating graph analytics into SQL environments.
The Future of Data Analysis: Embracing Graph Structures
The Paradigm Shift in Data Comprehension
In the multifaceted domain of data analysis, we stand at the precipice of a significant paradigm shift, one that promises to redefine our comprehension of the networks of variables that underpin the vast expanse of information that surrounds us. At the core of this transformative shift is an insightful realisation: the intricate interplay of interactions and dependencies that constitute our datasets can be conceptualised effectively as graphs.

Expanding the assertion that most things are graphs in disguise serves not just as an observation but as a clarion call for a paradigm shift in data analysis. This perspective does more than broaden our analytical arsenal; it fundamentally changes how we approach data, urging us to see beyond the surface to the interconnected fabric that binds variables together. By venturing into this uncharted analytical territory, we invite a renaissance in our understanding of data.

This philosophical and methodological shift champions the cause of analytical diversity, challenging the prevailing reliance on conventional models that often impose artificial boundaries on our exploratory endeavours. As we venture beyond these confines, the integration of graph-based analysis ushers in a broader analytical vista.

In synthesising this broader analytical vista, we not only enhance our grasp of the complex systems that our data seek to model but also arm ourselves with the intellectual and technical wherewithal to navigate the increasingly data-driven decision-making landscapes of the future. By fostering a culture of methodological pluralism, where graph-based analyses are integral, we enrich our analytical lexicon, enabling a more profound, informed, and nuanced engagement with the world around us.
Graphs in Conceptualizing Interactions and Dependencies
The intricate interplay of interactions and dependencies that constitute our datasets can be conceptualized effectively as graphs. This realization transcends the realm of mere mathematical constructs and ventures into the practical importance of these concepts in various scenarios. For instance, social networks map relationships between individuals, logistics networks outline transportation routes, and molecular biology charts chemical bonds between atoms.
Furthermore, the emphasis on graph structures invites a deeper exploration of the causal mechanisms underlying observable phenomena. Understanding the directionality and influence within networks allows for more than just pattern recognition; it enables the formulation of hypotheses about causality and the dynamics of systems. This is crucial in fields ranging from epidemiology to economics.
This graph-centric approach does more than just offer a new way to visualize correlations; it fundamentally changes how we think about data relationships. By embracing the principles of graph theory, we open up new avenues for dynamic analysis, allowing us to see patterns and connections that were previously obscured.
The Evolution of Data Management with Graph Analytics
The landscape of data management is undergoing a significant transformation with the advent of graph analytics. Graph databases are revolutionizing the way we understand and interact with complex data sets. The shift towards graph structures is not just a trend but a response to the growing complexity and interconnectedness of data.
Transitioning from traditional SQL databases to graph databases involves intricate ETL processes. These processes are resource-intensive and require a substantial investment of time and expertise:
- Complex ETL processes for data integration
- Continuous maintenance to keep the database responsive
- Specialized expertise for managing graph databases
For organizations looking to harness the analytical power of graph queries, solutions like PuppyGraph are paving the way. They offer a seamless integration of graph analytics within SQL environments, making advanced data analysis more accessible.
The integration of graph analytics into existing data management systems is a leap forward in our ability to comprehend and utilize data. It represents a paradigm shift from structured, tabular thinking to a more dynamic, relational approach.
In conclusion, graph data structures and analytics are revolutionizing the way we understand and interact with complex data. By offering a natural representation of relationships and dependencies, graph databases and query languages like Cypher and Gremlin enable deeper insights and more intuitive data exploration compared to traditional SQL databases. Despite the challenges associated with their implementation and adoption, tools like PuppyGraph are making graph analytics more accessible, bridging the gap between the structured world of SQL and the rich interconnectedness of graphs. As we continue to witness the integration of graph capabilities into various industries, the potential for innovation and discovery in data analysis is boundless. The future of data management is undeniably intertwined with the advancement of graph technology, promising to unlock a new horizon of possibilities for businesses and researchers alike.
Frequently Asked Questions
What are the main differences between graph queries and SQL queries?
Graph query languages are designed for navigating complex, interconnected data and allow for simple syntax to explore relationships. SQL queries can struggle to represent this interconnectedness without multiple complex joins.
What challenges do graph databases face in terms of implementation and scaling?
Graph databases offer significant analytical capabilities but come with hurdles such as a steep learning curve, complex ETL processes for data integration, and challenges in scaling for large datasets.
How do graph algorithms enhance data analysis compared to traditional methods?
Graph algorithms are adept at uncovering structures within data, such as identifying dense communities with clustering algorithms or finding efficient paths with pathfinding algorithms, offering insights that might be complex to extract via standard SQL queries.
What is PuppyGraph and how does it bridge the gap between SQL and graph analytics?
PuppyGraph is a tool that integrates graph analytics within SQL data environments, offering a simplified path to accessing graph capabilities without the need for complex ETL processes or new database setups.
How is the future of data analysis being shaped by graph structures?
The future of data analysis is moving towards a paradigm where data is understood as an interconnected network, with graph structures providing a more effective way to conceptualize interactions and dependencies within datasets.
What are the benefits of using a graph database for businesses?
Graph databases can model real-world complex systems and answer challenging questions with powerful data modeling and analysis capabilities, allowing businesses to easily uncover insights within their interconnected data.
https://www.mywebtuts.com/blog/codeigniter-4-application-folderdirectory-structure-tutorial

Codeigniter 4 Application Folder/Directory Structure Tutorial
Apr 21, 2022 . Admin
If you need to see an example of the CodeIgniter 4 directory structure, this tutorial is for you. We will walk through the PHP CodeIgniter folder structure, including how to get the application folder path. If you have questions about the CodeIgniter 4 folder structure, this simple example with a solution should answer them.
Here, let's see how the CodeIgniter 4 folder/directory structure below works. Follow my simple steps.

Step 1: Install CodeIgniter 4
First of all, if you have not created the CodeIgniter app yet, go ahead and execute the below command:
composer create-project codeigniter4/appstarter ci-news
First of all, after successfully running your project, go to your project folder and look at its directories; this is the new CodeIgniter 4 directory structure:
In your CodeIgniter 4 project you will see six directories.
So, I will explain how each of these directories actually works in a CodeIgniter 4 project.

/app
The /app directory is where all of your application code lives. This default directory structure works very well for many applications. The following folders make up its basic contents:
- /Config Stores the configuration files
- /Controllers Controllers determine the program flow
- /Database Stores the database migrations and seeds files
- /Filters Stores filter classes that can run before and after controller
- /Helpers Helpers store collections of standalone functions
- /Language Multiple language support reads the language strings from here
- /Libraries Useful classes that don’t fit in another category
- /Models Models work with the database to represent the business entities.
- /ThirdParty ThirdParty libraries that can be used in application
- /Views Views make up the HTML that is displayed to the client.
/system

This folder contains the framework core files. It is not advised to make changes in this directory or to put your own application code into it.
This directory stores the files that make up the framework itself. While you have a lot of flexibility in how you use the application directory, the files in the system directory should never be modified. Instead, you should extend the classes, or create new classes, to provide the desired functionality.
All files in this directory live under the CodeIgniter namespace.

/public
This folder is meant to be the “web root” of your site, and your web server would be configured to point to it.

/writable
This includes directories for storing cache files, logs, and any uploads a user might send. You should add any other directories that your application will need to write to here. This allows you to keep your other primary directories non-writable as an added security measure.

/tests
This directory is set up to hold your test files. The _support directory holds various mock classes and other utilities that you can use while writing your tests. This directory does not need to be transferred to your production servers.

/docs
This directory is part of your project; it holds a local copy of the CodeIgniter 4 User Guide.
It will help you...
https://forums.creativecow.net/docs/forums/post.php?forumid=2&postid=1129292&univpostid=1129292&pview=t

I am desperately looking for some help. It's frustrating that I cannot figure out a way to get the orientation right on the back side of a page that flips from right to left.
Can someone please explain how to use CC Page Turn when you have two pages back to back, so that the back page won't appear as a mirror image when flipping?
I am trying to animate the opening of a Gateway fold invitation card. The full open size of the card is 3600px wide by 1800px high, so the inside part is split into 4 equal panels of 900x1800px each (Inside01, Inside02, Inside03 & Inside04). Then I have the front panel, which is 1800x1800px and pre-composed into two panels of 900x900px each (Front01 & Front02).
At frame 0 (time 0s) my card will be in the closed position. Inside01 and Inside04 will be sitting folded over top of Inside02 and Inside03. Then on top of that i also have Front 01 & Front 02 on top of Inside01 & Inside04.
Then I applied the CC Page Turn to Front01 to have it open from right to left. And I also applied "Inside01" as the source for "Front & Back Page", with the anticipation that when Front01 flips open, the back side will be "Inside01".
That's working, but there's one problem: the text on Inside01 appears as a mirror image.
There must be a simple fix to reverse the orientation, but for the life of me, I can't get it. I tried rotating Inside01 180 degrees, parenting a reversed duplicate to its back, and all that, but nothing seems to work!!!
The best way is to precompose your sources. That's a good rule of thumb with any warp effect, as there can be distortions and squeezes you might not think of beforehand. With a precomposed source you can move, scale, or (in your case) flip the content so that when it's viewed in the outer comp (with the distortion effect on), it looks correct.
Yes, I finally managed to get it working... It was a bit tricky for me to get the concept, as this is the first time I have ever used AE. Nevertheless, the solution was to flip both front panels by -180 degrees and then add them as the source to the inside panels. I think you are suggesting the same, and thank you.
https://community.hubitat.com/t/is-there-a-community-repository-for-drivers-and-apps/3055

@JMack89427 Dig it. The current repo is owned by you. It looks like it has both 'current' and older versions in it. Something like the
quickGetWaterTemp feature @mike.maxwell is working on could be tracked as a branch and pull request. . . . if people are comfortable with that kind of workflow. I don't want to step on any toes, nor make it harder for people to contribute -- only to organize contributions so they're clear and don't cause regressions, and so people can always find the version they want.
Given my druthers, I would:
- Introduce a 'readme' with details about what is fixed in each version
- Create 'releases' that are tagged versions of the repo so people can readily find the version they want
- Use GitHub issues? Perhaps? To track what is being worked on so the thread doesn't get confused
- Use branches/pull requests to make fixes merging really clear - then people who wanted to work with a branch/beta version could, but others would know what was current and stable.
The latter two I'm unsure about. Are people familiar w/ branching and pull requests? Or would that create a barrier to entry that is too high?
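For anyone unfamiliar, the branch-and-tag part of the workflow I'm describing is only a handful of git commands. The sketch below runs against a throwaway demo repository; the branch and version names are placeholders:

```shell
set -e
repo=$(mktemp -d)                 # throwaway repo just for demonstration
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial driver import"
git checkout -q -b quickGetWaterTemp   # work-in-progress branch for the new feature
git checkout -q -                      # back to the main branch (as if the PR merged)
git tag v1.0.0                         # tagged release users can always find
git tag -l                             # prints: v1.0.0
```

A GitHub "release" is just such a tag plus a description of what it fixes, which is where the readme/versioning idea above comes in.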
You're the current repo owner, right @JMack89427? (I'm not even 100% sure of that. . . .)
If we did something like this, it could be a natural fit for eventual 'promotion' to eg the
HubitatPublic repo - it looks like that is fairly stalled at the moment, but they do have a repo of contributed code. If we got this stable and versioned, I'd be happy to talk w/ them about how to get this into it once it was complete and stable.
All of this with a big ol' grain of salt - I've been lurking and I think I have a feel for what's happening, but, again, don't want to step on any toes!
Say the word, I'll do the above - let me know what repo to do it in. I could even create a new one with a clear history and versions and make everyone here a contributor/owner.
https://heredragonsabound.blogspot.com/2016/10/welcome.html

|An example map from Martin O'Leary's fantasy map generator|
I was quite intrigued by Martin's map generator. (So was a lot of the Internet. And even National Geographic.) But unlike most folks, I accepted Martin's invitation to grab the code and start playing with it.
|An example map from my modified map generator|
For the past month or so, I've continued to modify and extend Martin's map generator. His maps reminded me of something you might see inside the front cover of an old fantasy paperback, so I made them look more like an old book page. I added color, and different ways to display the maps.
Along the way I've run into some interesting problems and insights. I've posted some of these to the procedural generation Reddit, but I don't want to overstay my welcome there with constant posting, so I decided to start this blog up to capture some of my work in creating this map generator. If that's the sort of thing you find interesting, I welcome your attention! My posting here will no doubt be erratic and meandering, so I encourage you to subscribe through RSS, email or some other method to be notified of new material.
https://www.mtechprojects.com/ieee-dsp-projects/79351-high-dimensional-mvdr-beamforming-optimized-solutions-based-on-spiked-random-matrix-models-2018.html | code | PROJECT TITLE :
High-Dimensional MVDR Beamforming: Optimized Solutions Based on Spiked Random Matrix Models - 2018
Minimum variance distortionless response (MVDR) beamforming (or Capon beamforming) is among the most common adaptive array processing methods due to its ability to provide noise resilience while nulling out interferers. A practical challenge with this beamformer is that it involves the inverse covariance matrix of the received signals, which must be estimated from data. In modern high-dimensional applications, it is well known that classical estimators can be severely affected by sampling noise, which compromises beamformer performance. Here, we propose a new approach to MVDR beamforming, which is suited to high-dimensional settings. In particular, by drawing an analogy between the MVDR problem and the so-called "spiked models" in random matrix theory, we propose robust beamforming solutions that are shown to outperform classical approaches (e.g., matched filters and sample matrix inversion techniques), in addition to more robust solutions, such as methods based on diagonal loading. The key to our method is the design of an optimized inverse covariance estimator, which applies eigenvalue clipping and shrinkage functions that are tailored to the MVDR application. Our proposed MVDR solution is simple, in closed form, and straightforward to implement.
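For readers who want to experiment, here is a minimal NumPy sketch of the classical sample-matrix-inversion (SMI) MVDR baseline that the project description contrasts against: w = R̂⁻¹a / (aᴴR̂⁻¹a). The array size, look direction, and snapshot count are arbitrary assumptions for the example; the optimized eigenvalue clipping/shrinkage estimator described above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 8, 200                                  # sensors, snapshots (arbitrary)
theta = 0.3                                    # hypothetical look direction (radians)
a = np.exp(1j * np.pi * np.arange(p) * np.sin(theta))  # ULA steering vector

# Noise-only snapshots: circular complex Gaussian with identity covariance.
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)

R_hat = X @ X.conj().T / n                     # sample covariance estimate
w = np.linalg.solve(R_hat, a)                  # R_hat^{-1} a
w /= a.conj() @ w                              # normalize so that a^H w = 1

# Distortionless constraint holds by construction (equals 1 up to floating point).
print(abs(a.conj() @ w))
```

A robust variant would replace `R_hat` above with a regularized estimate (e.g., diagonal loading `R_hat + delta * np.eye(p)`), which is one of the baselines the paper compares against.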
http://bluxte.net/node?page=34 | code | I guess every parent (including me) had this kind of experience of sick babies crying and puking the whole night long and feeling miraculously better when the day comes, while their parents are "dragging ass", as Russel says.
Keep going, Russ. This kind of inconvenience goes away as children grow. But others are coming ;-)
Side note : I'm wondering how Russel can write so much on his blog, including after a sleepless night...
I prepared a long entry about this, but you won't read it as I applied Stefano's two-emails pattern and trashed it. Some parts of it, though : Ivelin is a committer, and - although I'm not a native english speaker - "committer" means "commitment". I don't feel that in Ivelin's behaviour : moaning about 2.1 release, about "others" that have broken "his" XMLForm, etc.
Ah, and food for thought : look at Ivelin's bio at O'Reilly. Is patenting an XML protocol compatible with the opensource spirit ? Guess my opinion...
Time to blog again. I went through some difficult weeks with too much work... both at work and at home. This is going better now : I only have too much work... at work !
An important piece of Cocoon-related news these days is the 1.0 release of sunBow, an Eclipse plugin
that provides a number of features for Cocoon users : tree-oriented
sitemap editor, XML editor, XSL debugger and other goodies.
Although - as a Cocoon hard-core user - I prefer writing sitemaps
directly in XML, sunBow provides a missing piece to make Cocoon more
accessible to a wider range of non-techie users.
Noteworthy also : sunBow is free. Thanks, S&N !
Frustrated, that's what I am currently.
Frustrated about not being able to participate in Cocoon development when so much
is going on so fast there. And I'm starting to post
some rants, which is clearly not my habit and reveals this frustration.
It has been months since I last committed something significant.
Furthermore, I was nominated as an Avalon committer more than one
month ago and haven't had the time to make a single commit there.
Continue reading »
We have a saying in France : "in the land of blind people, one-eyed ones are kings". Marc obviously met the one-eyed people of that company. But one-eyed kings can only keep their crown if the land's frontiers are kept closed, because the outside world is full of two-eyed people...
Now I sometimes think the opensource-java land is populated by strange aliens that have three eyes. The third eye is the community that we are living in : nice projects and smart people multiply knowledge tenfold... and widen the gap with one-eyed kings.
Beware, ignorant kings, Cluetrain is at work, and frontiers will fall down rather sooner than later !
I'm honoured to be part of the Avalon project. Even if it has gone through major difficulties recently (and they're not totally over), I consider Avalon to gather some of the most talented software architects I've seen. I hope I'll be up to Avalon's high standards !
Continue reading »
Most of the active committers (including me) are members of the new Cocoon PMC. We now have to write our charter and rules.
This move will allow us to better organize the activity around Cocoon, by enabling dedicated sub-projects for Cocoon-related tools or features (the so-called "blocks") that don't fit into the core.
https://support.rebrandly.com/hc/en-us/articles/224740328-Rebrandly-for-iPhone-Demo | code | This tutorial will walk you through how to use Rebrandly on your iOS or iPhone device.
Go here to download the Rebrandly iOS app.
This video is about:
- How to use the Rebrandly iOS app
- How to brand links from my iPhone
- How to share links on mobile
- How to share links while not at my desktop | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103464.86/warc/CC-MAIN-20231211013452-20231211043452-00497.warc.gz | CC-MAIN-2023-50 | 297 | 7 |
http://occupysf.net/index.php/2018/11/07/hub-2-0-in-berkeley/ | code | We have a needs list from Snubbed by the HUB 2.0. They are located at Fairview and Adeline, in Berkeley. Michelle Lot is the contact person. These winter lists rarely change. Wind and cold mixed with precipitation suck when you are outside.
Solar system, tents, tarps, sleeping bags, pads, hand warmers, rain gear, socks, and coffee.
OK, I added the coffee. | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660070.15/warc/CC-MAIN-20190118110804-20190118132804-00227.warc.gz | CC-MAIN-2019-04 | 357 | 3 |
https://nextfrontiers.com.br/evento/nfcc2022/programacao/palestrante/105979 | code |
Asaf Zviran is the co-founder, CEO and CSO of C2i Genomics, a company that advances personalized cancer drug development and treatment by providing vital real-time cancer detection and monitoring at a global scale. Dr. Zviran led the company's development from an academic research concept to a VC-backed growth-stage company.
10:40 to 12:00 - Insights in Cancer Research - Session II - Circulating tumor DNA: Current and Future applications
10:40 to 11:10 - Ultra-sensitive detection and monitoring of solid cancers using whole-genome mutation integration
https://www.rapid7.com/db/vulnerabilities/linuxrpm-RHSA-2010-0423/ | code | Kerberos is a network authentication system which allows clients and servers to authenticate to each other using symmetric encryption and a trusted third party, the Key Distribution Center (KDC).

A NULL pointer dereference flaw was discovered in the MIT Kerberos Generic Security Service Application Program Interface (GSS-API) library. A remote, authenticated attacker could use this flaw to crash any server application using the GSS-API authentication mechanism, by sending a specially-crafted GSS-API token with a missing checksum field. (CVE-2010-1321)

Red Hat would like to thank the MIT Kerberos Team for responsibly reporting this issue. Upstream acknowledges Shawn Emery of Oracle as the original reporter.

All krb5 users should upgrade to these updated packages, which contain a backported patch to correct this issue. All running services using the MIT Kerberos libraries must be restarted for the update to take effect.
http://www2.port.ac.uk/school-of-computing/staff/miss-alaa-mohasseb.html | code | School of Computing
Miss Alaa Mohasseb
- Qualifications: BSc MSc
- Role Title: Research Student
- Address: Buckingham Building, Lion Terrace, Portsmouth PO1 3HE UK
- Telephone: +44 (0)23 9284 6400
- Email: [email protected]
- Department: School of Computing
- Faculty: Faculty of Technology
I am a research student in the School of Computing at the University of Portsmouth. I received my Master of Science in Information Systems as well as my Bachelor of Business Administration and Management Information Systems from the Arab Academy of Science and Technology and Maritime Transport, Alexandria, Egypt.
I am a Microsoft Certified Trainer (MCT), Microsoft Certified Professional Developer (MCPD), Microsoft Certified Technology Specialist (MCTS), Microsoft Certified Professional (MCP) and Adobe Certified Expert in Photoshop CS4.
My research interests include Natural language processing, Information Retrieval, Text Classification, Search Engines, Web Queries, Semantic Web, Data Mining and Artificial Intelligence.
INSTICC (Institute for Systems and Technologies of Information, Control and Communication) since 2014
- Natural Language Processing
- Information Retrieval
- Text Classification
- Search Engines
- Web Queries
- Semantic Web
- Data Mining
- Artificial Intelligence
- Transformation of Discriminative Single-Task Classification into Generative Multi-Task Classification in Machine Learning Context. Proceedings of the 9th International Conference on Advanced Computational Intelligence (ICACI 2017), February 2017
- Automated Identification of Web Queries using Search Type Patterns. Proceedings of the 10th International Conference on Web Information Systems and Technologies (WEBIST 2014), Barcelona, Spain (pages 295-304).
- Pharmacy Information Systems: One-year team work project aiming to create an inventory tracking system for pharmacies. The project involved system analysis, database modelling, process modelling, and a prototype creation. | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211664.49/warc/CC-MAIN-20180817025907-20180817045907-00193.warc.gz | CC-MAIN-2018-34 | 1,962 | 24 |
https://forums.creativecow.net/docs/forums/post.php?forumid=346&postid=4619&univpostid=4619&pview=t | code | I am trying to use WinProducer to produce SVCD or VCD video clips or movies. If I burn the movie directly to the CD there is no problem: I can play it on the DVD player and it has really good quality. But if I choose the option to store it on the hard disk so that I can burn it later to CD, then there is a big problem, because in that case the DVD player rejects the CD. I examined the two CDs. They have identical contents. Physically, though, the valid CD has an additional track of 6 seconds. I tried various techniques using Nero, VCDEasy and other similar packages to sort out the problem. The issue has also been sent to InterVideo and numerous other forums with no tangible result. I am wondering: is there no one else in the world who uses WinProducer and has a similar problem? Any help will be greatly appreciated.
This may or may not help, get the CD that plays right, put it back in the burner/rom and copy content to your desktop and burn it with nero to see if it works that way, if it does it is probably a setting or a bug within the software itself.
Thanks for the suggestion. I tried it. I copied the two folders which are on the good CD to the desktop. Then using NERO I burned them on a new CD. This new CD does not work. Then I produced an image from the good CD onto the desktop and burned this image via NERO onto another new CD and this one works. My conclusion is that WinProducer adds a small track to the content folders when it burns the CD and this track makes the CD recognizable to the DVD player. But when NERO gets these content folders, it does not or can not add this track and the result is the CD is rejected by the DVD player. In fact in the last couple of days I did a series of similar tests and repeatedly I saw a short additional track of 6 seconds on the good CDs. Of course it is not possible to see what this track contains. To summarize unfortunately I am trying to debug WinProducer 3 to get it to work, whereas the help desk of the software vendor prefers to ignore the complaints. In any case any further suggestions will be sincerely welcome. | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646636.25/warc/CC-MAIN-20180319081701-20180319101701-00484.warc.gz | CC-MAIN-2018-13 | 2,077 | 3 |
http://www.codeproject.com/script/Articles/ListAlternatives.aspx?aid=13170 | code | Members may post updates or alternatives to this current article in order to show different
approaches or add new features.
No alternatives have been posted.
ZengXi is a SOHO developer. Her expertise includes ATL, COM, Web Services, XML, Database Systems and Information Security Technology. Now she is interested in instant messaging software. She also enjoys reading books, listening to music and watching cartoons.
https://www.cake.co/conversations/8psrh09/linux-4-16-is-out | code | A block quote would be a great way to share an excerpt from another source without resorting to an image. To create a block quote, type or paste some text into your post, then select it, and in the formatting toolbar that appears click the button with a quote (") on it. It'll create a quoted block of text like this:
So the take from final week of the 4.16 release looks a lot like rc7, in that about half of it is networking. If it wasn't for that, it would all be very small and calm.
We had a number of fixes and cleanups elsewhere, but none of it made me go "uhhuh, better let this soak for another week". And davem didn't think the networking was a reason to delay the release, so I'm not.
End result: 4.16 is out, and the merge window for 4.17 is open and I'll start doing pull requests tomorrow.
Outside of networking, most of the last week was various arch fixlets (powerpc, arm, x86, arm64), some driver fixes (mainly scsi and rdma) and misc other noise (documentation, vm, perf). | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256215.47/warc/CC-MAIN-20190521022141-20190521044141-00553.warc.gz | CC-MAIN-2019-22 | 990 | 5 |
http://wppluginmarket.com/virtual-disk/virtual-disk-service-error-surface-pro.html | code | Virtual Disk Service Error Surface Pro
If you have any questions about using MiniTool Partition Wizard, feel free to leave a comment to let us know. If you can get the disk to go Online on another computer, the problem is most likely due to the configuration of the computer where the disk does not go Online. When using diskpart to convert a hard drive disk from MBR to GPT or vice versa, you may receive the following error message: Virtual Disk Service error: The specified disk is not convertible.
SUPER HATE THIS Same issue! No other user action is possible for basic volumes. Then click "OK" to go back to the main interface. If you are a partner, you could try the Microsoft Partner Forums.
Disk Management Unable To Connect To Virtual Disk Service
For instructions describing how to reactivate a volume, see http://go.microsoft.com/fwlink/?LinkId=64115. To manage disks on remote computers that do support VDS, you must configure the Windows Firewall on both the local computer (where you are running Disk Management) and the remote computer.
Reactivating a missing or offline dynamic disk, using the Windows interface or a command line. To reactivate a dynamic disk by using the Windows interface: in Disk Management, right-click the disk. Error: 'A device attached to the system is not functioning'. Failed to open attachment 'Drive Letter:\path\Virtual Hard drivers\VMNAME_########-####-####-####-############.vhd'.
However, there are logical partitions included in the extended partition. Step 3: press "Apply" to save changes. https://technet.microsoft.com/en-us/library/cc771775(v=ws.11).aspx For GPT Disks...
Disk Management Connecting To Virtual Disk Service Hangs
https://msdn.microsoft.com/en-us/library/windows/desktop/bb986750(v=vs.85).aspx For information about run-time requirements for a particular programming element, see the Requirements section of the documentation for that element. Disk Management Unable To Connect To Virtual Disk Service A dynamic volume's status is Healthy (At Risk). Unable To Connect To Virtual Disk Service Windows 10
This version of the Windows SDK can be used to develop VDS applications for Windows Server 2003, Windows Vista, and later. A dynamic disk's status is Offline or Missing. The disk should be marked Online after the disk is reactivated. After you install a new disk, the operating system must write a disk signature, the end of sector marker (also called signature word), and a master boot record or GUID partition table. Unable To Connect To Virtual Disk Service Server 2008 R2
C:\Windows\system32>attrib -h M:\RecoveryImage/install.wim Where "M" is the letter of said partition you unhid... Error 3 - The specified disk is not convertible.
Cause: The basic or dynamic volume cannot be started automatically, the disk is damaged, or the file system is corrupt. Unable To Connect To Virtual Disk Service Server 2012 R2 The answer is: you can try using MiniTool Partition Wizard Bootable Edition to help you. For more information about volume status descriptions, see http://go.microsoft.com/fwlink/?LinkId=64113.
In certain situations, deleting the partition is not an option.
In some cases, an unreadable disk has failed and is not recoverable. Removable media devices (such as Zip or Jaz drives), and optical discs (such as CD-ROM or DVD-RAM) are always automatically mounted by the system. Virtual Disk Service Error The Service Failed To Initialize For more information about disk status descriptions, see http://go.microsoft.com/fwlink/?LinkId=64112.
CD-ROMs and DVDs are examples of disks that are not convertible.
If not, return the disks to the Online status. Not sure for 2008, but for 2012 I solved it by DISKPART>"SET ID=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7". Please ignore the warning message and click "Apply".
Step 3: in the pop-up window, drop down the file system option box to select FAT32 as the target file system. Error 4 - Only the first 2TB are usable on large MBR disks. Thus if a data partition uses the NTFS file system and you want to convert it to FAT32, you can try its "Convert NTFS to FAT32" feature.
Windows Vista with SP1: Enable the Remote Volume Management Exception. Backup Operator or Administrator is the minimum membership required to perform these steps. At the DISKPART prompt, type online. Command reference — list disk: displays a list of disks and information about them, such as their size, amount of available free space, and whether the disk is basic or dynamic. When trying to use diskpart to extend a partition on a basic disk, the following error message may appear: Virtual Disk Service error: The size of the extent is less than the minimum.
online Brings an offline disk or volume with focus online. When any third-party backup software loads its tape device driver, the software can sometimes result in FSDepends.sys and VHDMP.sys not initializing correctly.
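Collecting the scattered diskpart commands mentioned above into one place, a minimal diskpart script might look like the following (the disk and partition numbers are hypothetical — substitute your own from `list disk`; the `set id` line is only the Server 2012 workaround quoted above and should not be applied blindly):

```
rem Show disks with their size, free space, and partition style
list disk
rem Hypothetical disk number -- pick yours from the list above
select disk 1
rem Bring an offline disk online (older diskpart versions use plain "online")
online disk
attributes disk clear readonly
rem Server 2012 workaround from the forum reply above: reset the partition
rem type GUID to "basic data partition"
select partition 1
set id=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
```

Such a script can be run non-interactively with `diskpart /s scriptfile.txt` from an elevated command prompt.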
https://www.blogger.com/profile/14705279258623569867 | code | On Blogger since September 2007
Profile views - 996
Location: Yorkshire, United Kingdom
Introduction: The Helena Callum blog is the place to find information about knitting patterns by Helena Callum. From 2005-2008 I attended evening classes to learn jewellery making and silversmithing. The Breezily Way blog was a record of these creative efforts (and some others too) and hosts a collection of sources of inspiration and information - but it is no longer maintained.
https://www.parkgateprimary.org.uk/monday-56/ | code | Continue to read at home each day.
Oxford Owl/Read Write Inc. e-books: https://home.oxfordowl.co.uk/books/free-ebooks/
Today, there will be two new sounds to learn. Read the sounds and the words containing the sounds.
Practise spelling the words containing the new sounds and recap previous Set 3 sounds.
Pick 3 common exception words from this list each day to practise spelling - https://cdn.oxfordowl.co.uk/2019/08/29/13/48/38/98b01b1e-5cd2-47f6-a592-f97cebd0b777/CommonExceptionWords_Y1.pdf
Today you are going to practise forming some letters and using alliteration.
Online lesson - https://www.bbc.co.uk/bitesize/articles/zh8c47h
Watch the videos and complete the activities.
It is really important to be able to recall number bonds to 20 fluently - continue practising your number bonds by playing these games.
Number bonds to 10 - https://www.twinkl.co.uk/resource/T-GO-01-number-bonds-1-to-10
Number bonds to 20 - https://www.twinkl.co.uk/resource/T-GO-02-number-bonds-1-to-20
To access these games, you will need a Twinkl account. You can create one here - https://www.twinkl.co.uk/offer
It is free and will give you access to lots of other resources. The offer code is UKTWINKLHELPS.
Today, you will be learning about what people do on UK seaside holidays.
Have you ever been to a beach in the UK? Have you ever been on a UK seaside holiday? How did you get there?
* At the beach - What can you do on the sand? What can you do in the sea? What can you eat? What can you do around the beach? (Restaurants/cafes/arcades/entertainment etc).
* On the PowerPoint - have a look at the pictures and talk about what the people are doing.
Activity - Draw pictures and label them of things you can do on a UK seaside holiday. | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00120.warc.gz | CC-MAIN-2021-31 | 1,726 | 18 |
https://www.storj.io/blog/development-update-35-from-storj-labs | code | Hello Storjlings! We took a little sabbatical from our normally scheduled bi-weekly development updates, but during that time we did a few small things.
We launched Tardigrade into GA production! Ok, it was a BIG thing. What this means is that anyone can start utilizing our decentralized cloud storage network by going to tardigrade.io/satellites and creating an account. We have over 5,000 users with accounts already and we're hoping to continue to drive growth on our network with the next set of open source, partner connectors we're building. These connectors allow users to easily integrate Tardigrade as the backend storage layer to their applications. They also help open source projects generate revenue, as we give these partners a very healthy cut of the revenue generated from every GB of data stored or retrieved on Tardigrade. If there's a connector you would find useful or would like to build, please reach out to us on the forum.
Along with launching the GA production release of Tardigrade, we've had some major changes within Storj Labs. We switched our espresso intake from cappuccinos to cortados to increase the velocity at which we can ingest caffeine. We also reorganized our entire company around three major teams: The Tardigrade User Growth Team, The Storage Node Operator Growth Team, and The Storj Network Maintain Team. We made this change to give our teams more autonomy over specific goals, to enable us to iterate more quickly on functionality based on user feedback, and to ensure company-wide alignment on our strategic initiatives. Our focus is to give Tardigrade and Storj users the best possible experience.
We hope you, your families, and your teams are staying safe and healthy during these difficult times with the global situation and Covid-19. We're doing great so far. Since we're already quite decentralized—with 45 employees across 10 countries and 22 cities—the Covid-19 crisis hasn't impacted the day-to-day function of our team and our fingers are crossed that this trend will continue.
- Tardigrade has officially launched and we're beyond thrilled we've finally reached this milestone. Thanks to our fantastic community members for supporting us throughout this long journey. If you've read all 35 of our development updates (or even half) this includes you!
- We finished the libuplink 1.0 implementation. This helps community members integrate Tardigrade at the programmatic layer.
- We improved the new user flow for creating your first project along with other UI and UX enhancements in the Satellite web interface.
- We increased the API rate limits to allow for more concurrent operations on the network.
In Our Next Post, We'll Cover:
- Which open source partner connectors are ready, and which connectors are almost finished.
- Our implementation for Storage Node uptime disqualification.
- Refactoring our billing implementation in order to support multiple projects per user.
- Displaying the held amount, along with other payment information on the Storage Node dashboard.
- Implementing distributed tracing across the network, so we can measure operations across the Uplinks, Satellites, and Storage Nodes to determine where we have bottlenecks.
- SNOBoard UI Enhancements like - mobile adaptation, black and white theme
- Linux Installer and Autoupdater
For More Information: | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100724.48/warc/CC-MAIN-20231208045320-20231208075320-00892.warc.gz | CC-MAIN-2023-50 | 3,343 | 17 |
https://deepai.org/publication/estimating-tail-probabilities-of-the-ratio-of-the-largest-eigenvalue-to-the-trace-of-a-wishart-matrix | code | Consider independent and identically distributed (iid) $p$-dimensional observations $X_1, \dots, X_n$
from a real or complex valued Gaussian distribution with mean zero and covariance matrix $\Sigma = \sigma^2 I_p$. Here $\sigma^2$ is an unknown scaling factor and $I_p$ is the identity matrix. Define the data matrix $X = (X_1, \dots, X_n)$, and assume $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p$ are the ordered real eigenvalues of the sample covariance matrix $XX^*/n$, where $^*$ denotes the conjugate transpose. Note that if $n < p$, the last $p - n$ of the $\lambda_i$'s are zero. Let $T$ be the ratio of the largest eigenvalue to the trace, viz.

$$T = \frac{\lambda_1}{\sum_{i=1}^{p} \lambda_i}.$$
We are interested in estimating the rare-event tail probability $P(T > c)$, where $c$ is some constant such that $P(T > c)$ is small. Estimating rare-event tail probabilities is often of interest in multivariate data analysis. For instance, in multiple testing problems, it is often needed to evaluate very small $p$-values for individual test statistics to control the overall false-positive error rate.
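As a concrete illustration (not taken from the paper), the following NumPy sketch draws real Gaussian data with identity covariance, computes the ratio statistic, and estimates the tail probability by plain Monte Carlo — the baseline that becomes impractical precisely when the probability is small. The values of $n$, $p$, $c$, and the replication count are arbitrary choices for the example.

```python
import numpy as np

def ratio_stat(X):
    """T = largest eigenvalue / trace of the sample covariance X X^T / n."""
    lam = np.linalg.eigvalsh(X @ X.T / X.shape[1])  # eigenvalues in ascending order
    return lam[-1] / lam.sum()

rng = np.random.default_rng(1)
n, p, c, reps = 50, 10, 0.25, 2000   # illustrative values only
hits = sum(ratio_stat(rng.standard_normal((p, n))) > c for _ in range(reps))
print(hits / reps)                    # crude Monte Carlo estimate of P(T > c)
```

For genuinely small probabilities, the number of replications required for a fixed relative error explodes, which is what motivates the importance sampling approach proposed in the paper.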
The random variable $T$ plays an important role in multivariate statistics when testing the covariance structure. For instance, it has been used to test for equality of the population covariance to a scaled identity matrix, viz. $H_0: \Sigma = \sigma^2 I_p$
with $\sigma^2$ unknown, i.e., the so-called sphericity test; see, e.g., muirhead2009aspects . The test statistic $T$
does not depend on the unknown variance parameter $\sigma^2$ and has high detection power against alternative covariance matrices with a low-rank perturbation of the null. In particular, under the alternative of a rank-1 perturbation with $\Sigma = \sigma^2 (I_p + h vv^*)$ for some unknown $h > 0$ and unit vector $v$, the likelihood ratio test statistic can be written as a monotone function of $T$ and therefore $P(T > c)$ corresponds to the $p$-value; see, e.g., (muirhead2009aspects, ; bianchi2011performance, ). Please refer to krzanowski2000principles ; muirhead2009aspects ; paul2014random for more discussion and many other applications.
The exact distribution of $T$ is difficult to compute, especially when estimating rare-event tail probabilities. Note that $XX^*$ follows a Wishart distribution, with $\beta = 1$ for real Gaussian and $\beta = 2$ for complex Gaussian observations. So the distribution of $T$ corresponds to that of the ratio of the largest eigenvalue to the trace of a Wishart matrix. However, this distribution is nonstandard and exact formulas based on it typically involve high-dimensional integrals or inverses of Laplace transforms. Numerical evaluation has been studied in davis1972ratios ; schuurmann1973distributions ; kuriki2001tail ; kortun2012distribution ; wei2012exact ; chiani2014distribution . But for high-dimensional data with $p$ large, the computation becomes more challenging, which is notably the case when $P(T > c)$ is small, due to the additional computational cost to control the relative estimation error of the estimator.
The asymptotic distribution of $T$ with $n$ and $p$ both going to infinity has also been studied in the literature. It is known that $T$ asymptotically behaves similarly to the largest eigenvalue $\lambda_1$, whose limiting distribution has been studied in johansson2000shape ; johnstone2001 , and $T$ also asymptotically follows the Tracy–Widom distribution; see, e.g., (bianchi2011performance, ; nadler2011distribution, ). That is,

$$P\left\{ \frac{T - \mu_{n,p}}{\sigma_{n,p}} \le x \right\} \to F_\beta(x),$$

where $F_\beta$ denotes the Tracy–Widom distribution of order $\beta$, with $\beta = 1$ or $\beta = 2$ for real and complex valued observations, respectively. In particular, for real-valued observations, appropriately chosen centering and scaling constants $\mu_{n,p}$ and $\sigma_{n,p}$
lead to a convergence rate of the order $O(n^{-2/3})$; see (ma2012accuracy, ). For the complex case, similar expressions can be found in karouicomplexrate . Nadler nadler2011distribution studied the accuracy of the Tracy–Widom approximation for finite values of $n$ and $p$. He found that the approximation may be inaccurate for small and even moderate values of $n$ and $p$. Therefore, he proposed a correction term to improve the approximation result, which is derived using the Fredholm determinant representation, and he quantified the improved approximation rate when $X$ follows a complex Gaussian distribution. In the real Gaussian case, which is of interest in many statistical applications, Nadler nadler2011distribution conjectured that the result also holds. The calculation of the correction term in nadler2011distribution depends on the second derivative of the non-standard Tracy–Widom distribution, which usually involves a numerical discretization scheme.
Another limitation of the existing methods is that they may become less efficient when estimating small tail probabilities of rare events. This paper aims to address this rare-event estimation problem. In particular, we propose an efficient Monte Carlo method to estimate the exact tail probability $P(T > c)$ by utilizing importance sampling. The latter is a commonly used tool to reduce Monte Carlo variance and it has been found helpful to estimate small tail probabilities, especially when the event is rare, in a wide variety of stochastic systems with both light-tailed and heavy-tailed distributions; see, e.g., (SIE76, ; AsmKro06, ; DupLedWang07, ; ASMGLY07, ; BlaGly07, ; LiuXu12, ; LiuXuTomacs, ; xu2014rare, ).
An importance sampling algorithm needs to construct an alternative sampling measure (a change of measure) under which the eigenvalues are sampled. Note that it is necessary to normalize the estimator with a Radon–Nikodym derivative to ensure an unbiased estimate. Ideally, one develops a sampling measure so that the event of interest is no longer rare under the sampling measure. The challenge is of course the construction of an appropriate sampling measure, and one common heuristic is to utilize a sampling measure that approximates the conditional distribution given the event . This paper proposes a change of measure that asymptotically approximates the conditional measure . We carry out a rigorous analysis of the proposed estimator for and show that it is asymptotically efficient. Simulation studies show that the proposed method outperforms existing approximation approaches, especially when estimating probabilities of rare events.
The remainder of the paper is organized as follows. In Section 2, we propose our importance sampling estimator and establish its asymptotic efficiency in Theorem 1. Numerical results are presented in Section 3 to illustrate its performance. We discuss the possibility of generalizing the result to the ratio of the sum of the largest eigenvalues to the trace of a Wishart matrix in Section 4. The proof of Theorem 1 is given in Section 5.
2 Importance sampling estimation
For ease of discussion, we consider the setting , and . When , the algorithm and theory are essentially the same up to switching the labels of and , which is explained in Remark 4. We use the notation to denote the real Wishart matrix () and the complex Wishart matrix (). Since is invariant to , the analysis does not depend on the specific values of , and we take as follows in order to simplify the notation and unify the real and complex cases under the same representation, as specified in Eq. (4) below:
When , we assume that . That is, the entries of are iid , and are the ordered eigenvalues of .
When , we assume . We consider the circularly symmetric Gaussian random variable (tse2005fundamentals, ), and we write when and are iid . In the following, we assume that the entries of are iid , and that are the ordered eigenvalues of .
As mentioned, e.g., in (dumitriu2002matrix, ), the eigenvalues
are distributed with probability density function
when , where is a normalizing constant given by
Then the target probability can be written as
where is the indicator function. As discussed in the Introduction, direct evaluation of the above -dimensional integral is computationally challenging, especially when is relatively large.
This work aims to design an efficient Monte Carlo method to estimate . We first introduce some computational concepts from the rare-event analysis literature, which help to evaluate the computational efficiency of a Monte Carlo estimator.
Consider an estimator of a rare-event probability , which goes to 0 as . We simulate iid copies of , say , and obtain the average estimator . We want to control the relative error such that for some prescribed ,
Consider the direct Monte Carlo estimator as an example. The direct Monte Carlo method generates samples directly from the density (4) and uses . So in each simulation we have a Bernoulli variable with mean . According to the central limit theorem, the direct Monte Carlo simulation requires iid replicates to achieve the above accuracy, where the notation is defined as follows. For any and depending on , means that . This implies that the direct Monte Carlo method becomes inefficient and even infeasible as .
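As a concrete illustration of why direct Monte Carlo breaks down for rare events, the following sketch estimates a tail probability of the largest-eigenvalue-to-trace ratio for a small real Wishart matrix. The dimensions, threshold, and replication count are illustrative choices, not values from the paper.

```python
import numpy as np

def direct_mc_tail(n, p, t, reps, seed=0):
    """Direct Monte Carlo estimate of P(lambda_1 / trace > t) for a real
    Wishart matrix W = X X^T, where X is an n x p matrix of iid N(0, 1)
    entries.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        eig = np.linalg.eigvalsh(X @ X.T)   # ascending order, all >= 0
        ratio = eig[-1] / eig.sum()         # largest eigenvalue over trace
        hits += int(ratio > t)
    return hits / reps

# A Bernoulli average: estimating a probability of order 1e-6 to 10%
# relative error would need on the order of 1e8 replicates.
p_hat = direct_mc_tail(n=3, p=8, t=0.6, reps=2000)
```

Since the ratio always lies in (1/n, 1], calling `direct_mc_tail` with a threshold of 1/n returns 1.0, which is a quick sanity check of the sampler.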
Note that (5) is equivalent to
for any . In addition, since and
by Hölder’s inequality, (5) is also equivalent to
When is asymptotically efficient, by Chebyshev’s inequality,
and therefore (6) implies that we only need , for any , iid replicates of . Compared with the direct Monte Carlo simulation, efficient estimation substantially reduces the computational cost, especially when is small.
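The replicate-count claim can be made concrete with a back-of-the-envelope normal-approximation calculation. The formula and the numbers below are illustrative, not quoted from the paper.

```python
import math
import statistics

def mc_replicates(p, eps, delta):
    """Number of iid replicates a direct Monte Carlo average needs so that
    the relative error stays below eps with probability 1 - delta, via the
    normal approximation N >= z^2 (1 - p) / (eps^2 p)."""
    z = statistics.NormalDist().inv_cdf(1 - delta / 2)
    return math.ceil(z * z * (1 - p) / (eps * eps * p))

# For a rare event (p = 1e-6), roughly 3.8e8 replicates are needed for
# 10% relative error at 95% confidence; for p = 0.01, only a few tens of
# thousands.
n_rare = mc_replicates(p=1e-6, eps=0.1, delta=0.05)
n_easy = mc_replicates(p=0.01, eps=0.1, delta=0.05)
```

The required sample size scales like 1/p, which is exactly why an asymptotically efficient estimator, needing only a slowly growing number of replicates, is so valuable.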
To construct an asymptotically efficient estimator, we use the importance sampling technique, which is a widely used method for variance reduction of a Monte Carlo estimator. We use to denote the probability measure of the eigenvalues . The importance sampling estimator is constructed based on the identity
where is a probability measure such that the Radon–Nikodym derivative is well defined on the set , and we use and to denote the expectations under the measures and , respectively. Let be the density function of the eigenvalues under the change of measure . Then, the random variable defined by
is an unbiased estimator of under the measure . Therefore, to have asymptotically efficient, we only need to choose a change of measure such that
To gain insight into the requirement (7), we consider some examples. First consider the direct Monte Carlo with ; the right-hand side of (7) then equals which is smaller than 1. On the other hand, consider to be the conditional probability measure given , i.e., ; then the right-hand side of (7) is exactly 1. Note that this change of measure is of no practical use since depends on the unknown . But if we can find a measure that is a good approximation of the conditional probability measure given , we would expect (7) to hold and the corresponding estimator to be efficient. In other words, the asymptotic efficiency criterion requires the change of measure to be a good approximation of the conditional distribution of interest.
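The change-of-measure idea behind (7) can be illustrated on a scalar toy problem: estimating a Gaussian tail probability by exponential tilting. This is a generic importance sampling sketch, not the paper's eigenvalue sampler; the tail level b = 4 and sample size are arbitrary.

```python
import math
import numpy as np

# Target: p = P(Z > b) for Z ~ N(0, 1).  Under the sampling measure Q we
# draw Y ~ N(b, 1), so the event {Y > b} is no longer rare, and each
# sample is reweighted by the Radon-Nikodym derivative
#   dP/dQ(y) = phi(y) / phi(y - b) = exp(-b*y + b^2 / 2).
b = 4.0
rng = np.random.default_rng(1)
y = rng.normal(loc=b, scale=1.0, size=100_000)
weights = np.exp(-b * y + b * b / 2.0)
is_estimate = float(np.mean((y > b) * weights))

exact = 0.5 * math.erfc(b / math.sqrt(2.0))  # P(Z > 4), about 3.17e-5
rel_err = abs(is_estimate - exact) / exact
```

Because Q concentrates mass exactly where the rare event happens, the weighted estimator achieves a small relative error with a sample size for which direct Monte Carlo would typically see only a handful of hits.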
Following the above argument, we construct the change of measure as follows, which is motivated by a recent study of Jiang et al. xu2016rare . These authors studied the tail probability of the largest eigenvalue, i.e., with and proposed a change of measure that approximates the conditional probability measure given in total variation when . It is known that the asymptotic behaviors of and are closely related. We therefore adapt the change of measure to the current problem of estimating . However, we would like to clarify that the problem of estimating is different from that in xu2016rare in terms of both theoretical justification and computational implementation, which is further discussed in Remark 3.
Specifically, we propose the following importance sampling estimator.
Every iteration in the algorithm contains three steps, as follows:
We use the matrix representation of the -Laguerre ensemble introduced in dumitriu2002matrix , and generate the matrix where is a bidiagonal matrix defined by
denotes the square root of the chi-square distribution with degrees of freedom, and the diagonal and sub-diagonal elements of are generated independently. We then compute the corresponding ordered eigenvalues of , denoted by .
Based on the collected values , a corresponding importance sampling estimate can be computed as in (12) below and the value of the estimate is saved.
The three steps above are repeated at every iteration. After the last iteration, the saved sampling estimates from all iterations are averaged to give an unbiased estimate of .
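Step 1's bidiagonal construction can be sketched in Python. The degrees of freedom below follow the standard real-case (beta = 1) Dumitriu–Edelman convention; the paper's own bidiagonal matrix (elided above) should be consulted for the exact entries, and the dimensions and replication count here are illustrative.

```python
import numpy as np

def wishart_eigs_bidiagonal(n, p, rng):
    """Sample the ordered eigenvalues of a real (beta = 1) Wishart matrix
    W(p, I_n), n <= p, via a bidiagonal representation: B has independent
    chi-distributed entries, diagonal chi_p, chi_{p-1}, ..., chi_{p-n+1}
    and subdiagonal chi_{n-1}, ..., chi_1, and the eigenvalues of the
    tridiagonal matrix B B^T match those of W in distribution."""
    diag = np.sqrt(rng.chisquare(np.arange(p, p - n, -1).astype(float)))
    sub = np.sqrt(rng.chisquare(np.arange(n - 1, 0, -1).astype(float)))
    B = np.diag(diag)
    B[np.arange(1, n), np.arange(0, n - 1)] = sub  # subdiagonal entries
    return np.sort(np.linalg.eigvalsh(B @ B.T))[::-1]  # descending

rng = np.random.default_rng(2)
samples = [wishart_eigs_bidiagonal(5, 10, rng) for _ in range(4000)]
# Sanity check: the expected trace of W(p, I_n) is n * p (here 50).
mean_trace = float(np.mean([s.sum() for s in samples]))
```

Only 2n - 1 chi-square draws are needed per replicate, instead of the n * p Gaussians required to form the dense Wishart matrix.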
Now we detail how the importance sampling estimate (12) is computed at every iteration of the algorithm. Let be the measure induced by combining the above two-step sampling procedure. From dumitriu2002matrix , under the change of measure , the density of is
This implies that the density function of under is
Therefore takes the form
The corresponding importance sampling estimate is given by
where is calculated with the sampled based on Eq. (1).
We claim that for the proposed Algorithm 1, with the choice of specified in (9), the importance sampling estimator is asymptotically efficient in estimating the target tail probability. This result is formally stated below and proved in Section 5.
When , the estimator in (12) is an asymptotically efficient estimator of for .
Our discussion regarding asymptotic efficiency focuses on the case of estimating rare-event tail probability , i.e., when corresponds to a rare event. When , is not rare, and we can still apply the importance sampling algorithm with a reasonable positive value as the exponential distribution’s rate. However, the theoretical properties of the importance sampling estimator must then be studied under a different framework; this issue is not pursued here.
We explain the Marchenko–Pastur form of (10). When the entries of have mean 0 and variance 1 ( and ), the Marchenko–Pastur law for the eigenvalues of takes the standard form
with and ; see, e.g., Theorem 3.2 in (paul2014random, ). For the setting considered in this paper, the real case () has , so (10) and (13) are consistent. In contrast, the complex case () has and therefore (10) and (13) are different up to a factor of . Specifically, let and be eigenvalues of when has iid entries of and , respectively. Then we know that and (13) implies the empirical distribution in (10).
We discuss the differences between the proposed method and the method in xu2016rare on the largest eigenvalue, which also employs an importance sampling technique.
First, the two methods have different targets, i.e., in xu2016rare and here, and therefore use different changes of measure to construct efficient importance sampling estimators. As discussed in Section 2, in order to achieve asymptotic efficiency, the changes of measure should approximate the target conditional distributions, i.e., in xu2016rare and in this paper. Due to the difference between the two conditional distributions, different changes of measure are constructed in the two methods. Specifically, Jiang et al. xu2016rare sample the largest eigenvalue from a truncated exponential distribution depending on the second largest eigenvalue , while the present work samples from an exponential distribution depending on eigenvalues .
Second, the proof techniques of the main asymptotic results in the two papers are also different. In particular, to show the asymptotic efficiency of the importance sampling estimators as defined in (5), we need to derive asymptotic approximations for both the rare-event probability and the second moment of the importance sampling estimator. Even though the largest eigenvalue and the ratio statistic have similar large deviation approximation results for their tail probabilities, the asymptotic approximations for the second moments of the importance sampling estimators are different due to the differences between the considered changes of measure as well as the effect of the trace term in . Please refer to the proof for more details.
The method and the theoretical results can be easily extended from the case to the case by switching the labels of and and changing to correspondingly. Note that when , the eigenvalues of and give the same test statistic as defined in (1), because and have the same set of nonzero eigenvalues and is scale invariant. By symmetry, when , the joint density function of the eigenvalues of has the same form as (4), except that the labels of and are switched. Therefore, the cases when and are equivalent up to the label switching. Note that after is changed to , becomes correspondingly.
3 Numerical study
We conducted numerical studies to evaluate the performance of our algorithm. We first took combinations , , , and and , respectively. Then we compared our algorithm with other methods and present the results in Tables 1 and 2.
For the proposed importance sampling estimator, we repeated the estimation times and show the estimated probabilities (“ ” column) along with the estimated standard deviations (“ ” column). The ratios between the estimated standard deviations and the estimates (“ ” column) reflect the efficiency of the algorithms. Note that with replications, the standard error of the estimate is .
In addition, three alternative methods were considered, namely the direct Monte Carlo, the Tracy–Widom distribution approximation, and the corrected Tracy–Widom approximation (nadler2011distribution, ). We computed direct Monte Carlo estimates (“” column) with independent replications. We present the standard deviation of direct Monte Carlo estimates (“” column) and the ratios between estimated standard deviations and estimates (“”). In addition, we used the approximation of Tracy–Widom distribution (“” column) specified in Eq. (2). The is computed from the RMTstat package in R. Furthermore, following nadler2011distribution , we computed the Tracy–Widom approximation with correction term (“” column), viz.
We can see from Tables 1 and 2 that the Tracy–Widom distribution (“” column) significantly overestimates the tail probabilities for all considered settings and the finding is consistent with that in nadler2011distribution . Furthermore, the corrected Tracy–Widom approximation (“” column) underestimates the tail probability and goes to a negative number as becomes small.
Since the proposed importance sampling and the direct Monte Carlo method are both unbiased estimators, next we compare their computational efficiency. As discussed in Section 2, for the average estimator , “” and “” can be used as a measure of the computational efficiency in terms of iteration numbers. From the results in Tables 1 and 2, as decreases, “” grows quickly and even becomes unavailable. In contrast, “” increases slowly and is generally smaller than “”, showing that the proposed importance sampling is more efficient than the direct Monte Carlo method.
As a further illustration, we compared the iteration numbers and that would be needed to achieve the same level of relative standard errors of the estimators. Specifically, in order to have the same ratios of the standard errors to the estimates, i.e., and , obtained under the importance sampling and the direct Monte Carlo, respectively, we need
Based on the above equation, the simulation results show that to achieve a standard error similar to that obtained under the importance sampling, the direct Monte Carlo method needs more iterations as becomes small. For example, from Table 1, when , and , we need to be approximately times larger than ; when , and , we need to be about times larger.
Besides the iteration numbers, we compared the average time cost of each iteration under the importance sampling and the direct Monte Carlo method, respectively. For the direct Monte Carlo, two methods were considered in computing the eigenvalues. The first method directly computes the test statistic using the eigen-decomposition of a randomly sampled Wishart matrix. The second method computes the eigenvalues from the tridiagonal representation form as in Step 1 of Algorithm 1. We ran iterations for all the methods and report the average time of one iteration in Table 3, where the first method of the direct Monte Carlo is denoted as , the second method is denoted as , and the importance sampling method is denoted as . The simulation results show that has the highest time cost per iteration, while and are similar.
We further explain the simulation results from the perspective of algorithm complexity. For each iteration, the first direct Monte Carlo method samples a Wishart matrix and performs its eigen-decomposition, whose cost is typically of the order of . The second direct Monte Carlo method and the importance sampling only need to sample number of chi-square random variables and then decompose a symmetric tridiagonal matrix, at a cost of per iteration Demmel:1997:ANL:264989 . Although the importance sampling also samples from an exponential distribution in Step 2, the distribution parameters can be calculated in advance and it does not affect the overall complexity much. Therefore, the time complexity of the algorithm is higher while and are similar per iteration. Together with the result in (15), we can see that the importance sampling is more efficient than the direct Monte Carlo method in terms of both the iteration number and the overall time cost.
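A quick sanity check of the tridiagonal route: because B is bidiagonal, T = B B^T is symmetric tridiagonal, and its eigenvalues are exactly the squared singular values of B, so the dense Wishart matrix never needs to be formed. Solvers that exploit the tridiagonal structure (for example scipy.linalg.eigvalsh_tridiagonal, if available) run in roughly O(n^2) time versus O(n^3) for a dense eigen-decomposition. The sketch below, with arbitrary dimensions, only verifies the algebraic identity using NumPy's dense solver; it does not reproduce the paper's timing results.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 12

# Bidiagonal B with chi-distributed entries (beta = 1 convention).
diag = np.sqrt(rng.chisquare(np.arange(p, p - n, -1).astype(float)))
sub = np.sqrt(rng.chisquare(np.arange(n - 1, 0, -1).astype(float)))
B = np.diag(diag)
B[np.arange(1, n), np.arange(0, n - 1)] = sub

# Eigenvalues of the tridiagonal T = B B^T vs squared singular values
# of B: the two spectra must agree up to floating-point error.
tri_eigs = np.sort(np.linalg.eigvalsh(B @ B.T))
sv_eigs = np.sort(np.linalg.svd(B, compute_uv=False) ** 2)
max_gap = float(np.max(np.abs(tri_eigs - sv_eigs)))
```

This is why the second direct Monte Carlo method and the importance sampler share the cheaper per-iteration cost: the expensive object is never a dense Wishart matrix, only a structured one.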
To further check the influence of the replication number of the importance sampling algorithm, we focus on the case and compare the performance of different s. In order to obtain accurate reference values of the tail probabilities, we used direct Monte Carlo with replications to estimate multiple tail probabilities s ranging from to under and , respectively. Then we estimated the corresponding s using our algorithm with , respectively.
Similarly, the line “Importance Sampling with error bar” represents the importance sampling estimates and pointwise confidence intervals, viz.
One can surmise from the figures that the proposed algorithm gives reliable estimates of probabilities as small as with , which is more efficient than direct Monte Carlo and more accurate than Tracy–Widom approximations. Furthermore, Figure 1 shows that the algorithm improves as the number of iterations increases. We also plot the Tracy–Widom approximations in (2) and (14) in Figure 1 for comparison.
Figure 1 shows that without correction, the Tracy–Widom distribution in (2) is not accurate and overestimates the probabilities. The correction term in (14) improves the approximation when the probability is larger than the scale of about , which is consistent with the result in nadler2011distribution . But when the probability gets smaller, the corrected approximation shows a larger deviation from the true values (on the scale) and even becomes negative. Note that since we cannot plot the of negative numbers in the figures, the lines of the corrected Tracy–Widom approximations appear to be shorter. These results validate the results in Tables 1 and 2.
https://experienceleague.adobe.com/en/docs/experience-manager-screens/user-guide/authoring/product-features/experience-fragments-in-screens

Using Experience Fragments
This page covers the following topics:
- Using Experience Fragments in AEM Screens
- Propagating Changes to the Page
An Experience Fragment is a group of one or more components including content and layout that can be referenced within pages. Experience fragments can contain any component, such as one or multiple components that can contain anything within a paragraph system that is referenced into the complete experience or requested by a third endpoint.
Using Experience Fragments in AEM Screens
This example uses We.Retail as the demo project, from which an Experience Fragment is applied from a Sites page to an AEM Screens project.
As an example, the following workflow demonstrates the use of experience fragments from We.Retail in Sites. You can choose a web page and use that content in an AEM Screens channel in one of your projects.
Creating a Demo Project with a Channel
Creating a Project
- To create a project, select Create Screens Project.
- Enter the Title as DemoProject.
- Select Save.
A DemoProject is added to your AEM Screens.
Creating a Channel
Navigate to the DemoProject you created and select the Channels folder.
Select Create from the action bar so you can open the wizard.
Choose the Sequence Channel template from the wizard and select Next.
Enter the Title as TestChannel and select Create.
A TestChannel is added to your DemoProject.
Creating an Experience Fragment
Follow the steps below to apply the content from We.Retail to your TestChannel in DemoProject.
Navigate to a Sites page in We.Retail
Navigate to Sites and select We.Retail > United States > English > Equipment so you can use this page as an Experience Fragment for your Screens channel.
Select Edit from the action bar so you can open the page you want to use as an Experience Fragment for your Screens channel.
Reusing the Content
- Select the fragment that you want to include in your channel.
- Select the last icon from the right so you can open the Convert to Experience Fragment dialog box.
Creating an Experience Fragment
Choose the Action as Create a new Experience Fragment.
Select the Parent path.
Select the Template. Choose the Experience Fragment - Screens Variation template here.
Enter the Fragment Title as ScreensFragment.
To complete the creation of a new Experience Fragment, select the check mark.
Note: To select an easier option, select the check mark to the right of the field so you can open the selection dialog box.
Creating Live Copy of Experience Fragment
a. Navigate to the AEM home page.
b. Select Experience Fragments, highlight the ScreensFragment, and select Variation as live-copy, as shown in the figure below:
c. Select the ScreensFragment from Create Live Copy wizard and select Next.
d. Enter the Title and Name as Screens.
e. Select Create so you can create the Live Copy.
f. Select Done so you can move back to ScreensFragment page.
NOTE: After you have created an AEM Screens fragment, you can edit its properties. Select the fragment and select Properties from the action bar.
Editing Properties of a Screens Fragment
Navigate to the ScreensFragment (you created in the preceding steps) and select Properties from the action bar.
Select the Offline Config tab, as shown in the figure below.
You can add the Client-side Libraries (JavaScript and CSS) and Static Files to your Experience Fragment.
The following example shows the addition of client-side libraries and the fonts as a part of static files to your Experience Fragment.
Using Experience Fragment as a Component in Screens Channel
Navigate to the Screens channel where you want to use the Screens fragment.
Select the TestChannel and select Edit from the action bar.
Select the components icon from the side tab.
Drag and drop the Experience Fragment to your channel.
e. Select the Experience Fragment component and select the top-left (wrench) icon so you can open the Experience Fragment dialog box.
f. In the Path field, select the Screens live copy of the fragment you created in Step 3.
g. Enter the duration in milliseconds in the Duration field.
h. Select the Offline Config tab from the Experience Fragments dialog box so you can define the client-side libraries and the static files.
NOTE: To add client-side libraries or static files in addition to what you configured in step 4, you can add them from the Offline Config tab in the Experience Fragment dialog box.
i. Select the check mark so you can complete the process.
Validating the Result
After completion of preceding steps, you can validate your Experience Fragment in ChannelOne by:
- Navigating to the TestChannel.
- Selecting the Preview from the action bar.
View the content from the Sites page (live-copy of the Experience Fragment) in your channel, as shown in the figure below:
Propagating Changes to the Page
Live Copy refers to the copy (of the source), maintained by synchronization actions as defined by the rollout configurations.
Because the Experience Fragment you created is a live copy from the Sites page, when you change that fragment on the primary page, the changes appear in your channel, or in any other destination where you have used the Experience Fragment.
Follow the steps below to propagate changes from the primary channel to your destination channel:
Select the Experience Fragment from the Sites (primary) page and select the pencil icon so you can edit the items in the Experience Fragment.
Select the Experience Fragment and select the wrench icon so you can open the dialog box to edit the images.
The Product Grid dialog box opens.
You can edit any of the images. For example, here the first image is replaced in this fragment.
Select the Experience Fragment and select the Rollout icon so you can propagate changes to the fragment that is used in your channel.
Notice that the changes are rolled out.
Validating the Changes
Follow the steps below to confirm the changes in your channel:
Navigate to the Screens > Channels > TestChannel.
Select Preview from the action bar.
The following image illustrates the changes in your TestChannel:
https://pypi.org/project/dulwich/0.19.8/

Python Git Library
[![Build Status](https://travis-ci.org/dulwich/dulwich.png?branch=master)](https://travis-ci.org/dulwich/dulwich) [![Windows Build status](https://ci.appveyor.com/api/projects/status/mob7g4vnrfvvoweb?svg=true)](https://ci.appveyor.com/project/jelmer/dulwich/branch/master)
This is the Dulwich project.
It aims to provide an interface to git repos (both local and remote) that doesn’t call out to git directly but instead uses pure Python.
Main website: [www.dulwich.io](https://www.dulwich.io/)
License: Apache License, version 2 or GNU General Public License, version 2 or later.
The project is named after the part of London that Mr. and Mrs. Git live in in the particular Monty Python sketch.
By default, Dulwich's setup.py will attempt to build and install the optional C extensions. The reason for this is that they significantly improve performance, since some frequently executed low-level operations are much slower in CPython.
If you don’t want to install the C bindings, specify the –pure argument to setup.py:
$ python setup.py --pure install
or if you are installing from pip:
$ pip install dulwich --global-option="--pure"
Note that you can also specify –global-option in a [requirements.txt](https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers) file, e.g. like this:
Dulwich comes with both a lower-level API and higher-level plumbing (“porcelain”).
For example, to use the lower level API to access the commit message of the last commit:
>>> from dulwich.repo import Repo
>>> r = Repo('.')
>>> r.head()
'57fbe010446356833a6ad1600059d80b1e731e15'
>>> c = r[r.head()]
>>> c
<Commit 015fc1267258458901a94d228e39f0a378370466>
>>> c.message
'Add note about encoding.\n'
And to print it using porcelain:
>>> from dulwich import porcelain
>>> porcelain.log('.', max_entries=1)
--------------------------------------------------
commit: 57fbe010446356833a6ad1600059d80b1e731e15
Author: Jelmer Vernooij <[email protected]>
Date:   Sat Apr 29 2017 23:57:34 +0000
Add note about encoding.
The dulwich documentation can be found in docs/ and [on the web](https://www.dulwich.io/docs/).
The API reference can be generated using pydoctor, by running “make pydoctor”, or [on the web](https://www.dulwich.io/apidocs).
There is a #dulwich IRC channel on the [Freenode](https://www.freenode.net/), and [dulwich-announce](https://groups.google.com/forum/#!forum/dulwich-announce) and [dulwich-discuss](https://groups.google.com/forum/#!forum/dulwich-discuss) mailing lists.
For a full list of contributors, see the git logs or [AUTHORS](AUTHORS).
If you’d like to contribute to Dulwich, see the [CONTRIBUTING](CONTRIBUTING.md) file and [list of open issues](https://github.com/dulwich/dulwich/issues).
Supported versions of Python
At the moment, Dulwich supports (and is tested on) CPython 2.7, 3.4, 3.5, 3.6 and Pypy.
https://texasdressing.com/products/boujee-cowgirl-tooled-leather-crossbody

Large and spacious crossbody. Easily fits a few journals and a laptop!
Tooled leather design on the front with a cactus in the middle and a turquoise color stone. The strap also has a tooled design. Fringe goes down the front sides. The large compartment zips close and the tooled leather flap goes over it. The back of the bag is plain leather. And of course it’s conceal carry.
Model is 5'1.
Leather has weight! Unlike fabric bags, leather does weigh more. Please keep that in mind.
All accessories, including handbags and purses, are final sale; no exchanges or returns on this item.
https://www.ouarzy.com/the-software-crafting-paradox

I was recently pushed out of my comfort zone by a young software dev. She pointed out that talking about code as a craft, as in software crafting, is a way of gatekeeping in tech. This is the kind of remark I like a lot, because it firmly challenged some strong beliefs deeply rooted in me. Thinking of software crafting as hurting the software industry is, for me, like thinking that being a vegan or a vegetarian could be bad for the planet. It just doesn't make sense in my understanding of the tech world.
I voluntarily took a few months to think about it, because I didn't want to react too quickly to that.
What is software crafting ?
In a few words, software crafting is the re appropriation of an agile mindset by the tech community.
In reaction to an agile community less and less interested in technique, and more and more interested in certifications to sell to management in big corporations, software crafting (originally software craftsmanship) emerged as a way to promote good practices in tech. Long story short, the point was to defend the practices that you can't bypass if your goal is to build an agile company based on an agile IT dev team. It is something really important for software developers because they are usually the first to suffer under the pressure of deadlines and requirements created by non-developers. The classical anti-pattern is when Agile means "cool, we can change our mind about what to do every two days" without accepting the technical counterpart, which is usually harder to implement than the classic waterfall process (I'm talking about unit testing, continuous deployment or at least continuous integration, and emergent design, for instance).
How did I discover it?
I first learned about it in books (the main one being The Software Craftsman by Sandro Mancuso). I recognized myself in so many aspects:
– the feeling that there is something wrong in the traditional way to produce software
– the feeling those agile gurus are often far from the operational place
– the will to progress during our whole career, even after decades
– the will to help other people, especially newcomers
– the humility leading to the ability of always questioning oneself
Meeting people worldwide that I consider software crafters, at events like SoCraTes and other conferences and meetups, confirmed to me that I wanted to be part of it. These people were always very welcoming, very humble and very open to new people, whatever their previous knowledge and experience. Software crafting and its community have been, for me, one of the main engines of continuous improvement and learning in more than a decade as a software developer. I also had the opportunity to welcome and help other people in this community, and I think it helped them improve as well. Hence, I had no doubt so far that this movement is strongly positive and can radically improve the daily life of most software developers.
How the next generation is discovering it ?
Of course, I can't speak for them, but here is my guess. They're struggling to implement some feature using the latest shiny framework in the latest shiny language, and here someone (usually a consultant, usually a white dude over thirty) arrives and basically tells them that they are doing shit. They know nothing, they should have test-driven their design and implemented a continuous integration infrastructure, and why the hell does no one seem to know or care about SOLID principles? And this consultant might wrap it all up with something like "I'm a software crafter, and you're not". Whether she says it explicitly or not, that is what you can feel around these toxic people.
Where does it fail?
I can understand the need for a somewhat extreme posture in a team when you want to change things. It isn't acceptable, though, if you lack the ability to help others and/or if you lack humility.
Questioning oneself means that each time we arrive in a team, we should challenge what best practices mean in this context. Because yes, TDD, SOLID and hexagonal architecture aren't silver bullets. They're just tools that may or may not be suitable for a given problem.
If you see yourself as an elite mastering complex software techniques, with a mission to evangelize everyone else, you totally miss the point of software crafting.
Can it be saved?
This kind of problem is also a consequence of something going more and more mainstream, as happened with Agile. Lots of people claim to be part of the movement; only a few actually understand what it means.
Unfortunately, those who make the most noise are usually the most visible, and if these people are indeed elitist or unable to question themselves, it may discredit the whole movement.
The software crafting paradox is that it requires lots of humility, hence the most visible people are most likely not the best crafters you can meet.
So how do you recognize a good crafter? It's simple: she's looking for ways to improve, she's always happy to help other people, especially newcomers, she's humble, and she's always questioning herself.
Nothing technical here, just a way of being and working that usually leads you to mastery in your domain of expertise.
Software crafting isn’t about code, it’s about your behaviour as a software professional.
http://lists.tako.de/Olympus-OM/2013-02/msg00051.html | code | OK, Folks. It's time. Time for a TOPE2 Event Launch. I will be
deleting the test gallery and putting up a new Event. Time to decide
what our first one is going to be. There were rumblings about 2013
first photo, but that was kinda an ad-hoc thing. Maybe not so good for
I would like it to be something that everybody can get into the spirit of and participate in.
PLEASE! Not cats!
Themed Olympus Photo Exhibition: http://www.tope.nl/
https://richardwiseman.wordpress.com/2010/11/12/its-the-friday-puzzle-85/ | code | First, next Friday I am speaking at a special conference for A-level students on science and pseudoscience. If you are a teacher or a student or even a normal person, and interested in coming along, the details are here.
Second, here is the Friday puzzle! A simple one this week: can you make the following equation correct by moving just one number…
62 – 63 = 1
As ever, please do NOT post your answer, but do say if you think you have solved it and how long it took. Solution on Monday!
https://github.com/MediaArea/RAWcooked | code | RAWcooked encodes RAW audio-visual data into the Matroska container (MKV), using the video codec FFV1 for the image and audio codec FLAC for the sound.
RAWcooked is ready for production. As with any other tool, keep doing basic checks, e.g. testing reversibility with your own files.
Information on this project, including builds for different platforms, can be found at https://mediaarea.net/RAWcooked.
http://thenokiareview.com/2010/02/26/shadow-mapping-augmented-reality-interaction-demo-on-the-nokia-n900/ | code | Earlier this year we brought you a cool demonstration of Augmented Reality on the Nokia N900. Augmented Reality is a technology that overlays computer generated information over a users view of the real world. Today we would like to update you on this project with two more, very exciting videos, showcasing shadow mapping and virtual interaction using the Nokia N900. The guys over at Rojtberg.net are really making some head way with this project, and although it may be some time before the application is available through the ‘maemo-extras’ repository, it surely does deserve some more attention. Videos after the break.
https://community.spiceworks.com/topic/306652-windows-photo-viewer-cannot-open-this-picture-because-the-file-appears-to-be | code | ...damaged corrupt or is too large.
A company has emailed us .TIF files in a ZIP folder. After putting the password in to open the files it comes up with the error message. I have listed above. I have tried a hotfix on the Microsoft Support website and I have checked the existence and properties of the .tmp and .temp folders. Does anyone have any suggestions?
Have you tried opening it using Microsoft Office Picture Manager?
Can we assume you have extracted the image from the ZIP before you tried to open it? Or have you opened the ZIP and then opened the TIF?
Thank you. I'm not sure why I didn't think of that in the first place. I have changed the default program for .TIF & .TIFF files to Microsoft Office Picture Manager and it has opened the files without any issues. Thank you again.
https://pinsym.wordpress.com/2014/06/13/activeageingtalk/ | code | I gave a talk at this Hackathon in Aug 2013, titled ‘3 Tips for Designing User Experiences for the Elderly. It was late on a Friday evening, and I think people were tired from a long work week. I shortened it because most of the age-related changes topics I wanted to cover were already covered by another Speaker. So I focused on the three tips, and hopefully left the audience with a reminder that what this target audience wants the most, is to just ‘Keep on Keeping On.’
Ageless Online hosts the full text of the talk here, with relevant slides.
https://freewaregenius.com/how-to-quickly-un-frame-a-site-in-internet-explorer/ | code | Have you ever been surfing the net with IE and ended up on a page that is stuck in a frame from another site? Typically in these instances the URL being shown is that of the framing website, with a frames scrollbar surrounding the page whose content you are viewing (which is actually the page you are interested in).
In some instances framing sites provide a way for the user to click out of the frame and onto the ’real page’, but in many cases they do not, which can be really annoying. Wouldn’t it be great if there were a single simple button you could push to ’de-frame’ a site?
Well, there is. Here’s how to do it with an (unfortunately named) free program called IECrap.
- Download IECrap from the developer’s home page (only 24K)
- Run the installer and select “Zoom Frame”; you can deselect the other components (see screenshot).
- That’s it. The next time you’re browsing a framed website, all you have to do is select “Zoom Frame” from IE’s right click context menu and your webpage will be instantly de-framed (see first screenshot above)
You might also want to check out the other 2 components. “Window Sizer” installs a dropdown that can be used to quickly change the size of the IE window. “Debug Box” is a debugging tool for developers which I didn’t really look into very much.
Note on compatibility: I am using this with IE7 with no problem, even though the compatibility blurb on the website states “(IE 4.0->6.0…)”.
Go to the IECrap developer’s page to get the latest version (approx 24K).
https://mail.gnome.org/archives/gtk-app-devel-list/2000-November/msg00076.html | code | finding the position of a (just created) widget
- From: Julian Bradfield <jcb+gtk dcs ed ac uk>
- To: gtk-app-devel-list gnome org
- Subject: finding the position of a (just created) widget
- Date: Tue, 7 Nov 2000 11:51:39 +0000 (GMT)
I want to show a widget, several layers down in a hierarchy
of containers, and then know immediately where it is, thus:
gtk_widget_show(w); /* w is a previously unshown child of a box */
/* here I want to know its position */
How can I do this?
The only thing I can think of so far is to iterate through the main
loop until I find some sensible values in the allocation record, but
that is unclean and requires me to disable all my other event handlers
that I don't want dispatched at that time.
Is there a clean way to get a synchronous resize calculation done?
https://www.garron.me/en/blog/secure-strong-password.html | code | On Secure Passwords
A lot of words have been written about how to create secure passwords, and how to store them (or not to store them at all).
And still, people choose weak passwords all the time, a lot of them even choose weak security questions. So, I think some more can be yet written about the matter.
When you choose your password, you can make it with only numbers, only lowercase letters, or mix in symbols and uppercase letters. This decision determines the size of the alphabet you are using.
For example, if you use only numbers, you will have an "alphabet" of size 10, as there are only 10 digits, which you can combine to create all passwords made out of numbers. Suppose you have chosen a password made of 6 digits, let us say 345098. To guess it, an attacker has to combine ten digits (0, 1, …, 9) in up to six positions, trying every length from 1 to 6: 10 + 100 + … + 10^6 = 1,111,110 combinations. It looks like a big number, but a computer can run through it really fast.
If we change the first digit to a lowercase letter and make the password look like this: a45098, we have now added 26 letters to our "alphabet", and the guesser needs to combine 36 symbols in up to six positions. Suddenly the possible combinations go from 1,111,110 to 2,238,976,116, roughly 2,000 times more. What if we add an uppercase letter? aB5098 gives 57,731,386,986 combinations.
I am sure you understood how important it is to use numbers, uppercase letters, lowercase letters and symbols.
Length of password
Go back to a six-digit, numbers-only password. Now, instead of adding more character sets, we will add more positions; in other words, we will keep it numbers-only but make it seven digits long instead of six. Now the possible combinations are 11,111,110. Adding positions also increases the number of possibilities and makes a password harder to break.
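Both effects, alphabet size and password length, are easy to check. The following Python sketch is illustrative (it is not part of the original article); it counts every password of length 1 up to a maximum, which is how the cumulative totals above are obtained:

```python
def total_passwords(alphabet_size: int, max_length: int) -> int:
    """Count all passwords of length 1..max_length over the given alphabet."""
    return sum(alphabet_size ** k for k in range(1, max_length + 1))

# Growing the alphabet (length capped at 6):
print(total_passwords(10, 6))   # 1111110      - digits only
print(total_passwords(36, 6))   # 2238976116   - digits + lowercase letters
print(total_passwords(62, 6))   # 57731386986  - digits + lower + uppercase

# Growing the length (digits only):
print(total_passwords(10, 7))   # 11111110     - one extra position
```

The ratio 2,238,976,116 / 1,111,110 ≈ 2,015 is where the roughly two-thousand-fold increase mentioned above comes from.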
Choosing a good password
Now that we know there are two parameters to play with when creating a password, and understand how they work, we need a good method to create and remember passwords.
What I usually do is pick a phrase that is familiar to you. Suppose you call your girlfriend "My blue bear" (I have never done that) and her name is Tina. You can create a password like this one: tinamybluebear. This one is very hard for a computer to guess, but not for your cousin (so to speak). We need to add some more things.
- Add uppercase letters: TinaMyBlueBear
- Add numbers: 0TinaMyBlueBear0
- Add symbols: .0TinaMyBlueBear0.
Now that password is easy to remember for you, hard to guess for your relatives and friends, and really hard for computers. It is not in any dictionary, and if a computer starts guessing and tries to brute-force it, it will have to go through about 4 × 10^35 possible guesses. That is a 4 followed by 35 zeros. It is a huge number.
That password is 18 characters long and uses an alphabet of 95 characters. Its entropy is about 118 bits (18 × log2 95).
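To make these figures concrete, here is a quick sanity check (an illustrative sketch, not from the original post), assuming the 95 printable ASCII characters as the alphabet:

```python
import math

alphabet_size = 95   # printable ASCII characters
length = 18          # e.g. ".0TinaMyBlueBear0."

search_space = alphabet_size ** length
entropy_bits = length * math.log2(alphabet_size)

print(f"search space: {search_space:.2e}")   # about 3.97e+35 guesses
print(f"entropy: {entropy_bits:.0f} bits")   # about 118 bits
```

Even at billions of guesses per second, exhausting a search space on the order of 10^35 is hopeless.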
Another way is to use symbols between the words; you have to be creative.
Do not repeat passwords
It does not matter how hard you work at creating your password; if you use the same one for every service you use on the Internet, it loses all its strength.
If somehow the password database of that small site you once tried and never logged into again is stolen, the person who has it will now have your login/password pair, and if he tries it on Twitter or Facebook or Gmail, will he have access to your account? If the answer is yes, you are screwed.
So, create good passwords for every important site you sign in to; you can have weak, simple passwords for all the other sites you do not care that much about. For banks and PayPal, use unique passwords.
Example for individual passwords
For Facebook use your father: "–John.Scott.Smith00"; for Twitter your mother: "–Tania.Lynn.Smith00"; for Gmail your grandpa: "–Oliver.Brandon.Balboa00". There you have symbols, letters in uppercase and lowercase, and numbers, in a pattern that is easy to remember, with names you can easily remember. You can even write them down using some kind of "secret code":
- Dad facebook http://www.facebook.com/js-smith
- Mom Twitter http://twitter.com/tsmith
- Grandpa's email: [email protected]
You now know that for Facebook you have to use your father's name in your pattern, for Twitter your mother's name, and for Gmail your grandpa's name. Any other person who finds that will believe you were just taking notes of your relatives' accounts.
To create a strong password, try to use at least one lowercase letter, one uppercase letter, one number and one symbol. With that in mind, make it as long as you can, keeping in mind that you have to remember it.
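That rule of thumb translates directly into code. The sketch below is mine, not the author's; it checks that a candidate password contains at least one character from each of the four classes:

```python
import string

def has_all_classes(password: str) -> bool:
    """True if the password mixes lowercase, uppercase, digits and symbols."""
    return (
        any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(has_all_classes(".0TinaMyBlueBear0."))  # True
print(has_all_classes("tinamybluebear"))      # False - lowercase only
```

A real checker would also enforce a minimum length, since, as shown earlier, length multiplies the search space just as much as a richer alphabet does.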
Finally, another option is to use the PasswordCard service; take a look at it.
https://themes-dl.com/nulled-nexopos-4-x-pos-crm-inventory-manager-free-download/ | code | NexoPOS 4.x – POS, CRM & Inventory Manager-[Clean-Nulled].zip
NexoPOS 4.x is a modern point-of-sale application that comes with a CRM and an inventory manager included. It can be used by small and large companies that want to simplify inventory management as well as tracking their income through sales. Like NexoPOS 3.x, this new major version can be extended with modules.
Compatibility With NexoPOS 3.x Modules
NexoPOS 4.x is a complete rebuild. None of the NexoPOS 3.x modules are compatible with NexoPOS 4.x.
= v4.2.3 - 2021.05.07
* Added : return exact string for unmatched translation
* Fixed : create symbolic link on windows for language
* Fixed : flat array with keys
* Added : new hook when crud form is loaded
* Added : new filter when crud footer form is rendered
* Added : new filter when settings footer is rendered
* Fixed : missing translation file on Vue component
* Update : CKEditor.
* Added : 2 languages (French & Spanish)
* Fixed : ensure localization is correctly loaded
* Added : new path to extract classes.

= v4.2.2 - 2021.05.04
* Added : extraction tools
* Added : language base files
* Added : default system language
* Fixed : missing localization functions
* Fixed : missed attribute while setting up.
* Fixed : ensure the store language is selected by default.
* Update : updating dependencies
* Fixed : error with module missing languages

= v4.2.1 - 2021.05.07
* Added : download to CSV feature.
* Added : a way to install NexoPOS from the CLI
* Added : upload user profile
* Added : isolated media manager
* Added : quick setup command
* Added : hard reset command
* Added : new settings
* Update : tailwindcss
* Ignoring expenses settings
* Added : Module details command
* Adding : database for testing.
* Added : new logo
* Added : new recovery feature (including mails)
* Added : CI test suite configuration file.
* Fixed : CSRF mismatch on new installation using a domain with port
* Fixed : required parameter
* Added : new commands to handle modules
* Added : register modules command.
* Added : process status editable.
* Fixed : profile avatar overflow
* Added : new attribute to Order model.
* Fixed : unknown process status.
* Added : localization support on labels.ts
* Fixed : number length
* Added : new animation
* Created : various tests
* Fixed : unit group deleted with a product
* Added : category reference for each ordered product
* Fixed : tax group is computed only if it's provided
* Moved : Gastro Test to the module
* Fixed : allow snackbar issue
* Added : new command to forget module migration
* Added : new method for calculating a value percentage
* Added : websocket support
* Updated : notification to support WebSocket
* Adjusting broadcast name
* Fixed : ensure keep in sync on client-side
* Fixed : date format
* Fixed : default sidebar state
* Securing private access
* Improving comments
* Added : new middleware
* Added : pusher to composer.json
* Fixed : broadcasting configurations
* Improved : unit testing
* Updated : order date can be modified
* Updated : better error message when a tax group is not found.
* Sorted : proper message when there is a misconfiguration on the taxes
* Exposing some POS queues
* Fixing responsivity for cart buttons
* Fixed : issue with SetupService.
* Fixed : sale price set if the tax is not provided
* Added : new crud component to edit a product expiration.
* Fixed : date time picker description
* Added : ensuring products with accurate tracking can be adjusted
* Added : label on crud table to see the items displayed and total items available
* Fixed : missing column

= v4.0.0
* Initial Release
Please note: we post new content such as WordPress themes, plugins and PHP scripts every day. Remember, however, that you should never use these items on a commercial site. All of the content posted here is for development and testing purposes only. We are not responsible for any damage; use at your own RISK! We highly recommend buying NexoPOS 4.x – POS, CRM & Inventory Manager from the developer's ( blair_jersyer ) website. Thank you.
https://www.amks.me/blog/tdi | code | tl;dr — Machine Learning research is booming right now. SOTA records are broken every day, and some of the world’s smartest people are working to advance the field. Machine learning research is also opaque, inaccessible, and filled with unwritten institutional knowledge. The Daily Ink is my attempt to change this. The latest in AI/ML research, explained in simple language, in your inbox every Monday/Wednesday/Friday. Check out the posts from the last couple of months, and subscribe if this sounds up your alley :)
Why Start Writing?
I think Philip J Guo’s “The Ph.D. Grind” describes the process of getting into research very well. Imagine: Your advisor has been in this playground for multiple decades, you’ve just been tossed in, and all you can do is read a lot and try to stay afloat. Much like philosophy, all research references itself. Going back to the fundamentals— the Platos of ML (that’s probably the classic LeNet or MLP papers) feel archaic and hard to place in context. Modern research feels extremely tangled up. It’s very hard to start! I was thankful to have mentors who held my hand through this process.
But not everyone has the fortune to go to a world-class research institution, make friends with kind and welcoming graduate students, or have the time to invest in struggling with reading research until it clicks. This does not mean they shouldn’t have access to it, however — because research might seem like a scary beast, but some of the most complex concepts are quite elegant and simple at their core. I think everybody even mildly interested in ML should have access to understanding this core.
ML Research is overwhelming in quantity
Here’s something crazy: more than 10,000 papers are published in ML every year. That’s roughly 30 per day. A normal human with a full-time job has no hope of keeping up with this velocity. Part of my mission with the Daily Ink is to be a filter in the noise of research, similar to @_akhaliq and @DAIRInstitute on Twitter. The Daily Ink will cover the latest hottest papers (pre-print, conferences, and otherwise) as well as some of the classics of NLP/ML once in a while.
ML Research is often alchemy
One of my professors at Berkeley once likened ML to alchemy. I find the analogy quite appropriate. A lot of times, things in ML are empirical and “just work”, even if we don’t have an underlying theory for why. Understanding these trends and heuristics is distinct from understanding the papers — you can do the latter without the former. Through writing about papers, I intend to introduce people to the “insider knowledge” that is often brushed away in research papers.
ML Research is the hottest thing on the fucking planet
There are a lot of unsolved problems in ML and LLMs right now. It is exciting and enthralling to see small improvements have massive downstream effects. For example, a reduction in I/O cost led to the first Language Model breakthrough in extending context length. The more equitable we make machine learning, the more people can solve these problems, and the better we can make the models we are creating. Making cutting-edge ML accessible is essential to the future of ML.
I’ve always loved teaching — I did a lot of it at Berkeley, and I intend to keep doing a lot more. Beyond the paper reviews, here’s what you should expect in the next few months:
1. Deep dives into fundamental concepts like embeddings or feature engineering
2. Tutorials to get started with making your own models
3. Other people!!
It’s a really exciting time to be in ML. I hope you’ll consider joining me on this journey. I’ll see you the next odd day of the week, as we get ever closer to creating AGI together ;)
https://docs.oracle.com/cd/E19225-01/820-5822/byauh/index.html | code | To enable the Console, File, JDBC, or Scripted audit publishers, follow the steps in To Enable Custom Audit Publishers. Select the appropriate publisher type from the New Publisher drop-down menu.
Complete the Configure New Audit Publisher form. If you have questions about the form, refer to the i-Helps and online Help.
The Console audit publisher prints audit events to either standard out or to standard error.
The File audit publisher writes audit events to a flat file.
The JDBC audit publisher records audit events in a JDBC datastore.