The 'scrap_img_by_id' function no longer returned anything useful. This
fix allows the Google Images engine to present the full source image instead of
only the thumbnail.
The function scrap_img_by_id() is replaced by a full rewrite that parses image
URLs with a regular expression. The new function parse_urls_img_from_js(dom)
returns a mapping of data-id to image URL.
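For illustration, a minimal sketch of such a parser, assuming an lxml dom; the
regular expression and the script selection below are assumptions, not the
exact pattern used by the engine::

    import re

    # Hypothetical pattern: the inline JavaScript pairs a full size image
    # URL with the data-id of the corresponding thumbnail element.
    RE_IMG_URL_BY_ID = re.compile(
        r'\["(?P<url>https?://[^"]+)",\d+,\d+\][^\]]*?"(?P<id>dimg_[^"]+)"'
    )

    def parse_urls_img_from_js(dom):
        """Return a mapping of data-id to full size image URL."""
        ret_val = {}
        for script in dom.xpath('//script[contains(., "dimg_")]'):
            for match in RE_IMG_URL_BY_ID.finditer(script.text or ''):
                ret_val[match.group('id')] = match.group('url')
        return ret_val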
Closes: https://github.com/searxng/searxng/issues/909
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This commit sets an appropriate height for the (embedded) player from:
- soundcloud
- mixcloud
- deezer
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This is a rewrite of hostname_replace.py that:
- does not stop replacing URLs in the fields ('data_src', 'audio_src') when
  there is no 'parsed_url',
- adds a comment about keeping or removing a result from the result list,
- adds a loop over ['data_src', 'audio_src'] instead of duplicating code lines
  (see the sketch after this list).
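A simplified sketch of that loop, assuming 'replacements' is a mapping of
compiled hostname patterns to replacement hostnames built from settings.yml;
the real plugin also rewrites the result 'url'/'parsed_url' and decides about
keeping or removing a result::

    from urllib.parse import urlparse

    def replace_hostnames(result: dict, replacements: dict) -> None:
        # Loop over the embed URL fields instead of duplicating the code
        # for each field (simplified sketch, not the actual plugin code).
        for url_field in ['data_src', 'audio_src']:
            if not result.get(url_field):
                continue
            url_src = urlparse(result[url_field])
            for pattern, replacement in replacements.items():
                if pattern.search(url_src.netloc):
                    result[url_field] = url_src._replace(netloc=replacement).geturl()
                    break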
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Embedded HTML breaks the SearXNG architecture. To modularize, HTML is generated
in the templates (oscar & simple) and the result parameter 'embedded' is
replaced by 'data_src' (and 'audio_src'), a URL for embedded content (<iframe>).
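For illustration, a hypothetical result as an engine's response() might append
it (the URLs and titles are made-up examples): only the URL of the embeddable
player is passed on, and the oscar and simple templates wrap it in an <iframe>
themselves::

    results = []

    # Hypothetical result dict: no prebuilt 'embedded' HTML snippet anymore,
    # just the embed URL in 'audio_src' (or 'data_src' for video content).
    results.append({
        'url': 'https://soundcloud.com/artist/track',   # example result URL
        'title': 'Example track',
        'audio_src': 'https://w.soundcloud.com/player/?url=https%3A//soundcloud.com/artist/track',
    })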
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
To test this you need to redirect embedded videos (e.g.) from YouTube to an
Invidious instance. Search for videos using the engine `!youtube lebowski`. The
result URLs and the embedded videos should link to the Invidious instance.
Here is an example of such a `hostname_replace` configuration::

    hostname_replace:
      # youtube --> Invidious
      '(.*\.)?youtube-nocookie\.com': 'invidio.xamh.de'
      '(.*\.)?youtube\.com$': 'invidio.xamh.de'
      '(.*\.)?invidious\.snopyta\.org$': 'invidio.xamh.de'
      '(.*\.)?vid\.puffyan\.us': 'invidio.xamh.de'
      '(.*\.)?invidious\.kavin\.rocks$': 'invidio.xamh.de'
      '(.*\.)?inv\.riverside\.rocks$': 'invidio.xamh.de'
Closes: https://github.com/searxng/searxng/issues/873
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Embedded HTML breaks the SearXNG architecture. To modularize, HTML is generated
in the templates (oscar & simple) and the result parameter 'embedded' is
replaced by 'data_src', a URL for embedded content (<iframe>).
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
OpenStreetMap images are now loaded from uploads.wikimedia.org instead of
commons.wikimedia.org to prevent redirects.
With `image_proxy` enabled, images from commons.wikimedia.org cannot be loaded
since they are redirected. We already discussed this issue in [875], and
@tiekoetter fixed it in PR [878].
Related-to:
- [875] https://github.com/searxng/searxng/issues/875
- [878] https://github.com/searxng/searxng/pull/878
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Wikidata info box images are now loaded from uploads.wikimedia.org instead of
commons.wikimedia.org to prevent redirects.
Co-authored-by: Markus Heiser <markus.heiser@darmarit.de>
Two different threads ( = two different user queries) can call the request
function in a row and then the response function. The namespace will be the
same since this is the same engine.
To keep exactly the same value, ``base_url`` must be stored in params and then
retrieved using ``resp.search_params["base_url"]``.
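A sketch of the pattern; apart from ``base_url`` and ``resp.search_params`` the
names and URLs are illustrative, not the actual engine code::

    import random
    from urllib.parse import urlencode

    # Illustrative engine module: several interchangeable base URLs,
    # one is picked at random per query.
    base_url = ['https://mirror-1.example.org', 'https://mirror-2.example.org']

    def request(query, params):
        # The module namespace is shared between threads, so the chosen
        # URL is stored in params instead of a module level variable.
        params['base_url'] = random.choice(base_url)
        params['url'] = params['base_url'] + '/search?' + urlencode({'q': query})
        return params

    def response(resp):
        # Retrieve exactly the value that was used to build this request.
        base = resp.search_params['base_url']
        results = []
        # ... parse resp.text and build absolute result URLs with `base` ...
        return results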
Suggested-by: @dalf https://github.com/searxng/searxng/pull/862#discussion_r799324861
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Can replace filtron:
* rate limit the number of requests per IP and per (IP, User-Agent)
* block some bots
Uses Redis.
Data stored in Redis never contains the IP addresses, only an HMAC using the
secret_key.
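A minimal sketch of how such keys can be derived; the key layout and names are
assumptions, only the HMAC idea is taken from the description above::

    import hmac
    from hashlib import sha256

    def secret_hash(secret_key: bytes, data: bytes) -> str:
        # Only this digest ends up in Redis, never the raw client IP.
        return hmac.new(secret_key, data, sha256).hexdigest()

    # Illustrative usage: one counter per IP and one per (IP, User-Agent).
    secret_key = b'server-side-secret'
    key_ip = 'limiter:ip:' + secret_hash(secret_key, b'203.0.113.7')
    key_ip_ua = 'limiter:ip_ua:' + secret_hash(secret_key, b'203.0.113.7\nMozilla/5.0')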
Co-authored-by: Markus Heiser <markus.heiser@darmarit.de>
Now that about.html extends page_with_header.html, it already has a link to
the start page, and removing the link makes it easier to extract the page title
from the Markdown for the following commit.
Currency engine has DuckDuckGo metadata
In the engine selector of the preferences window, the currency search engine has
the same metadata and Wikidata URL as DuckDuckGo. I'd assume there should be a
difference of some sort there, clarifying what source the currency engine uses
or, if it's a DuckDuckGo service, at least clarifying that it's a currency
service by DuckDuckGo.
Closes: https://github.com/searxng/searxng/issues/787
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Previously the preferences & stats templates contained the markup:
<a href="{{ url_for('index') }}"><h1><span>SearXNG</span></h1></a>
There are many things wrong with this:
1. the markup was duplicated
2. the CSS needed to be changed whenever a new page wanted to use this
header (since the CSS used page-specific selectors)
3. h1 should be reserved for the actual page title
(e.g. Preferences or Engine stats)
4. the image was set via CSS which also set:
span { visibility: hidden; }
which, however, removes the alternative text from the accessibility
tree (meaning screen readers will ignore it).
This commit fixes all these problems.
Other optional parameters:

`&sort=crawl_date`
  can be appended to search_string to sort results by date.

`&domain=example.org`
  can be appended to search_string to get results from just one domain.

Public instances could relatively quickly get timed out for 3600s.
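A sketch of how these optional arguments could be appended to the
search_string; the surrounding function and names are illustrative, not the
actual engine code::

    from urllib.parse import urlencode

    # Hypothetical request() fragment showing the optional arguments.
    search_string = '/search?{query}&page={page}'

    def build_search_url(query, page, sort_by_date=False, domain=None):
        url_query = search_string.format(query=urlencode({'q': query}), page=page)
        if sort_by_date:
            url_query += '&sort=crawl_date'    # sort results by date
        if domain:
            url_query += '&domain=' + domain   # results from one domain only
        return url_query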
--
Merged from @allendema's commit [1] and slightly modified / see [2].
Related-to: [1] 455b2b4460
Related-to: [2] https://github.com/searx/searx/pull/3040
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>